00:00:00.002 Started by upstream project "autotest-per-patch" build number 132804
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.092 The recommended git tool is: git
00:00:00.092 using credential 00000000-0000-0000-0000-000000000002
00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.131 Fetching changes from the remote Git repository
00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.164 Using shallow fetch with depth 1
00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.164 > git --version # timeout=10
00:00:00.193 > git --version # 'git version 2.39.2'
00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.112 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.123 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.136 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.136 > git config core.sparsecheckout # timeout=10
00:00:05.147 > git read-tree -mu HEAD # timeout=10
00:00:05.162 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.184 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.184 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.274 [Pipeline] Start of Pipeline
00:00:05.285 [Pipeline] library
00:00:05.287 Loading library shm_lib@master
00:00:05.287 Library shm_lib@master is cached. Copying from home.
00:00:05.301 [Pipeline] node
00:00:05.309 Running on WFP3 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.310 [Pipeline] {
00:00:05.321 [Pipeline] catchError
00:00:05.322 [Pipeline] {
00:00:05.332 [Pipeline] wrap
00:00:05.351 [Pipeline] {
00:00:05.356 [Pipeline] stage
00:00:05.358 [Pipeline] { (Prologue)
00:00:05.585 [Pipeline] sh
00:00:05.872 + logger -p user.info -t JENKINS-CI
00:00:05.891 [Pipeline] echo
00:00:05.893 Node: WFP3
00:00:05.899 [Pipeline] sh
00:00:06.196 [Pipeline] setCustomBuildProperty
00:00:06.205 [Pipeline] echo
00:00:06.206 Cleanup processes
00:00:06.209 [Pipeline] sh
00:00:06.491 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.491 1720876 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.504 [Pipeline] sh
00:00:06.802 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.802 ++ grep -v 'sudo pgrep'
00:00:06.802 ++ awk '{print $1}'
00:00:06.802 + sudo kill -9
00:00:06.802 + true
00:00:06.817 [Pipeline] cleanWs
00:00:06.827 [WS-CLEANUP] Deleting project workspace...
00:00:06.827 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.834 [WS-CLEANUP] done
00:00:06.838 [Pipeline] setCustomBuildProperty
00:00:06.853 [Pipeline] sh
00:00:07.135 + sudo git config --global --replace-all safe.directory '*'
00:00:07.238 [Pipeline] httpRequest
00:00:07.576 [Pipeline] echo
00:00:07.578 Sorcerer 10.211.164.112 is alive
00:00:07.585 [Pipeline] retry
00:00:07.587 [Pipeline] {
00:00:07.597 [Pipeline] httpRequest
00:00:07.600 HttpMethod: GET
00:00:07.601 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.601 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.629 Response Code: HTTP/1.1 200 OK
00:00:08.629 Success: Status code 200 is in the accepted range: 200,404
00:00:08.630 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.889 [Pipeline] }
00:00:29.905 [Pipeline] // retry
00:00:29.911 [Pipeline] sh
00:00:30.194 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.207 [Pipeline] httpRequest
00:00:30.618 [Pipeline] echo
00:00:30.620 Sorcerer 10.211.164.112 is alive
00:00:30.628 [Pipeline] retry
00:00:30.631 [Pipeline] {
00:00:30.646 [Pipeline] httpRequest
00:00:30.650 HttpMethod: GET
00:00:30.651 URL: http://10.211.164.112/packages/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz
00:00:30.652 Sending request to url: http://10.211.164.112/packages/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz
00:00:30.663 Response Code: HTTP/1.1 200 OK
00:00:30.663 Success: Status code 200 is in the accepted range: 200,404
00:00:30.664 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz
00:01:40.100 [Pipeline] }
00:01:40.112 [Pipeline] // retry
00:01:40.116 [Pipeline] sh
00:01:40.396 + tar --no-same-owner -xf spdk_b8248e28c89c09106c84e7622ffae26b1edceaab.tar.gz
00:01:42.939 [Pipeline] sh
00:01:43.223 + git -C spdk log --oneline -n5
00:01:43.223 b8248e28c test/check_so_deps: use VERSION to look for prior tags
00:01:43.223 805149865 build: use VERSION file for storing version
00:01:43.223 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:43.223 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:43.224 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:43.234 [Pipeline] }
00:01:43.248 [Pipeline] // stage
00:01:43.257 [Pipeline] stage
00:01:43.259 [Pipeline] { (Prepare)
00:01:43.277 [Pipeline] writeFile
00:01:43.296 [Pipeline] sh
00:01:43.581 + logger -p user.info -t JENKINS-CI
00:01:43.594 [Pipeline] sh
00:01:43.877 + logger -p user.info -t JENKINS-CI
00:01:43.889 [Pipeline] sh
00:01:44.172 + cat autorun-spdk.conf
00:01:44.172 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.172 SPDK_TEST_NVMF=1
00:01:44.172 SPDK_TEST_NVME_CLI=1
00:01:44.172 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:44.172 SPDK_TEST_NVMF_NICS=e810
00:01:44.172 SPDK_TEST_VFIOUSER=1
00:01:44.172 SPDK_RUN_UBSAN=1
00:01:44.172 NET_TYPE=phy
00:01:44.182 RUN_NIGHTLY=0
00:01:44.194 [Pipeline] readFile
00:01:44.232 [Pipeline] withEnv
00:01:44.233 [Pipeline] {
00:01:44.241 [Pipeline] sh
00:01:44.521 + set -ex
00:01:44.521 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:44.521 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:44.521 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.521 ++ SPDK_TEST_NVMF=1
00:01:44.521 ++ SPDK_TEST_NVME_CLI=1
00:01:44.521 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:44.521 ++ SPDK_TEST_NVMF_NICS=e810
00:01:44.521 ++ SPDK_TEST_VFIOUSER=1
00:01:44.521 ++ SPDK_RUN_UBSAN=1
00:01:44.521 ++ NET_TYPE=phy
00:01:44.521 ++ RUN_NIGHTLY=0
00:01:44.521 + case $SPDK_TEST_NVMF_NICS in
00:01:44.521 + DRIVERS=ice
00:01:44.521 + [[ tcp == \r\d\m\a ]]
00:01:44.521 + [[ -n ice ]]
00:01:44.521 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:44.521 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:44.521 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:44.521 rmmod: ERROR: Module i40iw is not currently loaded
00:01:44.521 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:44.521 + true
00:01:44.521 + for D in $DRIVERS
00:01:44.521 + sudo modprobe ice
00:01:44.521 + exit 0
00:01:44.531 [Pipeline] }
00:01:44.547 [Pipeline] // withEnv
00:01:44.553 [Pipeline] }
00:01:44.567 [Pipeline] // stage
00:01:44.577 [Pipeline] catchError
00:01:44.579 [Pipeline] {
00:01:44.594 [Pipeline] timeout
00:01:44.595 Timeout set to expire in 1 hr 0 min
00:01:44.597 [Pipeline] {
00:01:44.611 [Pipeline] stage
00:01:44.613 [Pipeline] { (Tests)
00:01:44.627 [Pipeline] sh
00:01:44.912 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.912 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.912 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:44.912 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:44.912 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:44.912 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.912 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.912 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:44.912 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.912 + source /etc/os-release
00:01:44.912 ++ NAME='Fedora Linux'
00:01:44.912 ++ VERSION='39 (Cloud Edition)'
00:01:44.912 ++ ID=fedora
00:01:44.912 ++ VERSION_ID=39
00:01:44.912 ++ VERSION_CODENAME=
00:01:44.912 ++ PLATFORM_ID=platform:f39
00:01:44.912 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:44.912 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:44.912 ++ LOGO=fedora-logo-icon
00:01:44.912 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:44.912 ++ HOME_URL=https://fedoraproject.org/
00:01:44.912 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:44.912 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:44.912 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:44.912 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:44.912 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:44.912 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:44.912 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:44.912 ++ SUPPORT_END=2024-11-12
00:01:44.912 ++ VARIANT='Cloud Edition'
00:01:44.912 ++ VARIANT_ID=cloud
00:01:44.912 + uname -a
00:01:44.912 Linux spdk-wfp-03 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:44.912 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:47.451 Hugepages
00:01:47.451 node hugesize free / total
00:01:47.451 node0 1048576kB 0 / 0
00:01:47.451 node0 2048kB 0 / 0
00:01:47.451 node1 1048576kB 0 / 0
00:01:47.451 node1 2048kB 0 / 0
00:01:47.451
00:01:47.451 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:47.451 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:47.451 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:47.710 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:47.710 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:01:47.710 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:47.710 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:47.710 + rm -f /tmp/spdk-ld-path
00:01:47.710 + source autorun-spdk.conf
00:01:47.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.710 ++ SPDK_TEST_NVMF=1
00:01:47.710 ++ SPDK_TEST_NVME_CLI=1
00:01:47.710 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.710 ++ SPDK_TEST_NVMF_NICS=e810
00:01:47.710 ++ SPDK_TEST_VFIOUSER=1
00:01:47.710 ++ SPDK_RUN_UBSAN=1
00:01:47.710 ++ NET_TYPE=phy
00:01:47.710 ++ RUN_NIGHTLY=0
00:01:47.710 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:47.710 + [[ -n '' ]]
00:01:47.710 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:47.710 + for M in /var/spdk/build-*-manifest.txt
00:01:47.710 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:47.710 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.710 + for M in /var/spdk/build-*-manifest.txt
00:01:47.710 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:47.710 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.710 + for M in /var/spdk/build-*-manifest.txt
00:01:47.710 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:47.710 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.710 ++ uname
00:01:47.710 + [[ Linux == \L\i\n\u\x ]]
00:01:47.710 + sudo dmesg -T
00:01:47.710 + sudo dmesg --clear
00:01:47.970 + dmesg_pid=1722262
00:01:47.970 + [[ Fedora Linux == FreeBSD ]]
00:01:47.970 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.970 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.970 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:47.970 + [[ -x /usr/src/fio-static/fio ]]
00:01:47.970 + export FIO_BIN=/usr/src/fio-static/fio
00:01:47.970 + FIO_BIN=/usr/src/fio-static/fio
00:01:47.970 + sudo dmesg -Tw
00:01:47.970 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:47.970 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:47.970 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:47.970 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.970 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.970 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:47.970 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.970 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.970 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.970 15:34:43 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:47.970 15:34:43 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:47.970 15:34:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:47.970 15:34:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:47.970 15:34:43 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.971 15:34:43 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:47.971 15:34:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:47.971 15:34:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:47.971 15:34:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:47.971 15:34:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:47.971 15:34:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:47.971 15:34:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.971 15:34:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.971 15:34:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.971 15:34:43 -- paths/export.sh@5 -- $ export PATH
00:01:47.971 15:34:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.971 15:34:43 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:47.971 15:34:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:47.971 Traceback (most recent call last):
00:01:47.971 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:01:47.971 import spdk.rpc as rpc # noqa
00:01:47.971 ^^^^^^^^^^^^^^^^^^^^^^
00:01:47.971 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:47.971 from .version import __version__
00:01:47.971 ModuleNotFoundError: No module named 'spdk.version'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733754883.XXXXXX
00:01:47.971 15:34:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733754883.NY5DEy
00:01:47.971 15:34:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:47.971 15:34:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:47.971 15:34:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:47.971 15:34:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.971 15:34:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:47.971 15:34:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:47.971 15:34:43 -- pm/common@17 -- $ local monitor
00:01:47.971 15:34:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.971 15:34:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.971 15:34:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.971 15:34:43 -- pm/common@21 -- $ date +%s
00:01:47.971 15:34:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.971 15:34:43 -- pm/common@21 -- $ date +%s
00:01:47.971 15:34:43 -- pm/common@25 -- $ sleep 1
00:01:47.971 15:34:43 -- pm/common@21 -- $ date +%s
00:01:47.971 15:34:43 -- pm/common@21 -- $ date +%s
00:01:47.971 15:34:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754883
00:01:47.971 15:34:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754883
00:01:47.971 15:34:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754883
00:01:47.971 15:34:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733754883
00:01:47.971 Traceback (most recent call last):
00:01:47.971 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:01:47.971 import spdk.rpc as rpc # noqa
00:01:47.971 ^^^^^^^^^^^^^^^^^^^^^^
00:01:47.971 File "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python/spdk/__init__.py", line 5, in <module>
00:01:47.971 from .version import __version__
00:01:47.971 ModuleNotFoundError: No module named 'spdk.version'
00:01:47.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754883_collect-vmstat.pm.log
00:01:47.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754883_collect-cpu-temp.pm.log
00:01:47.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754883_collect-cpu-load.pm.log
00:01:47.971 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733754883_collect-bmc-pm.bmc.pm.log
00:01:48.910 15:34:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:48.910 15:34:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:48.910 15:34:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:48.910 15:34:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:48.910 15:34:44 -- spdk/autobuild.sh@16 -- $ date -u
00:01:48.910 Mon Dec 9 02:34:44 PM UTC 2024
00:01:48.910 15:34:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.169 v25.01-pre-305-gb8248e28c
00:01:49.169 15:34:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:49.169 15:34:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:49.169 15:34:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:49.169 15:34:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.169 15:34:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.169 15:34:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.169 ************************************
00:01:49.169 START TEST ubsan
00:01:49.169 ************************************
00:01:49.169 15:34:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:49.169 using ubsan
00:01:49.169
00:01:49.169 real 0m0.000s
00:01:49.169 user 0m0.000s
00:01:49.169 sys 0m0.000s
00:01:49.169 15:34:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.169 15:34:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.169 ************************************
00:01:49.169 END TEST ubsan
************************************
00:01:49.169 15:34:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:49.169 15:34:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:49.169 15:34:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:49.169 15:34:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:49.169 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:49.169 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:49.736 Using 'verbs' RDMA provider
00:02:02.504 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:14.707 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:14.707 Creating mk/config.mk...done.
00:02:14.707 Creating mk/cc.flags.mk...done.
00:02:14.707 Type 'make' to build.
00:02:14.707 15:35:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:14.707 15:35:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:14.707 15:35:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:14.707 15:35:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.707 ************************************
00:02:14.707 START TEST make
00:02:14.707 ************************************
00:02:14.707 15:35:09 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:16.615 The Meson build system
00:02:16.615 Version: 1.5.0
00:02:16.615 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:16.615 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.615 Build type: native build
00:02:16.615 Project name: libvfio-user
00:02:16.615 Project version: 0.0.1
00:02:16.615 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:16.615 C linker for the host machine: cc ld.bfd 2.40-14
00:02:16.615 Host machine cpu family: x86_64
00:02:16.615 Host machine cpu: x86_64
00:02:16.615 Run-time dependency threads found: YES
00:02:16.615 Library dl found: YES
00:02:16.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:16.615 Run-time dependency json-c found: YES 0.17
00:02:16.615 Run-time dependency cmocka found: YES 1.1.7
00:02:16.615 Program pytest-3 found: NO
00:02:16.615 Program flake8 found: NO
00:02:16.615 Program misspell-fixer found: NO
00:02:16.615 Program restructuredtext-lint found: NO
00:02:16.615 Program valgrind found: YES (/usr/bin/valgrind)
00:02:16.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:16.615 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:16.615 Compiler for C supports arguments -Wwrite-strings: YES
00:02:16.615 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:16.615 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:16.615 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:16.615 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:16.615 Build targets in project: 8
00:02:16.615 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:16.615 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:16.615
00:02:16.615 libvfio-user 0.0.1
00:02:16.615
00:02:16.615 User defined options
00:02:16.615 buildtype : debug
00:02:16.615 default_library: shared
00:02:16.615 libdir : /usr/local/lib
00:02:16.615
00:02:16.615 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:17.181 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:17.181 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:17.181 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:17.181 [3/37] Compiling C object samples/null.p/null.c.o
00:02:17.181 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:17.439 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:17.439 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:17.439 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:17.439 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:17.439 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:17.439 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:17.439 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:17.439 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:17.439 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:17.439 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:17.439 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:17.439 [16/37] Compiling C object samples/server.p/server.c.o
00:02:17.439 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:17.439 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:17.439 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:17.439 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:17.439 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:17.439 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:17.439 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:17.439 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:17.439 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:17.439 [26/37] Compiling C object samples/client.p/client.c.o
00:02:17.439 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:17.439 [28/37] Linking target samples/client
00:02:17.439 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:17.439 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:17.439 [31/37] Linking target test/unit_tests
00:02:17.697 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:17.697 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:17.697 [34/37] Linking target samples/gpio-pci-idio-16
00:02:17.697 [35/37] Linking target samples/server
00:02:17.697 [36/37] Linking target samples/lspci
00:02:17.697 [37/37] Linking target samples/null
00:02:17.697 INFO: autodetecting backend as ninja
00:02:17.697 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:17.697 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:18.263 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:18.263 ninja: no work to do.
00:02:23.539 The Meson build system
00:02:23.539 Version: 1.5.0
00:02:23.539 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:23.539 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:23.539 Build type: native build
00:02:23.539 Program cat found: YES (/usr/bin/cat)
00:02:23.539 Project name: DPDK
00:02:23.539 Project version: 24.03.0
00:02:23.539 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:23.539 C linker for the host machine: cc ld.bfd 2.40-14
00:02:23.539 Host machine cpu family: x86_64
00:02:23.539 Host machine cpu: x86_64
00:02:23.539 Message: ## Building in Developer Mode ##
00:02:23.539 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.539 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.539 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.539 Program python3 found: YES (/usr/bin/python3)
00:02:23.539 Program cat found: YES (/usr/bin/cat)
00:02:23.540 Compiler for C supports arguments -march=native: YES
00:02:23.540 Checking for size of "void *" : 8
00:02:23.540 Checking for size of "void *" : 8 (cached)
00:02:23.540 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:23.540 Library m found: YES
00:02:23.540 Library numa found: YES
00:02:23.540 Has header "numaif.h" : YES
00:02:23.540 Library fdt found: NO
00:02:23.540 Library execinfo found: NO 00:02:23.540 Has header "execinfo.h" : YES 00:02:23.540 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.540 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.540 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.540 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.540 Run-time dependency openssl found: YES 3.1.1 00:02:23.540 Run-time dependency libpcap found: YES 1.10.4 00:02:23.540 Has header "pcap.h" with dependency libpcap: YES 00:02:23.540 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.540 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.540 Compiler for C supports arguments -Wformat: YES 00:02:23.540 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.540 Compiler for C supports arguments -Wformat-security: NO 00:02:23.540 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.540 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.540 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.540 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.540 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.540 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.540 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.540 Compiler for C supports arguments -Wundef: YES 00:02:23.540 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.540 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.540 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.540 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.540 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.540 Program objdump found: YES (/usr/bin/objdump) 00:02:23.540 Compiler for C supports arguments -mavx512f: YES 00:02:23.540 Checking if "AVX512 checking" compiles: YES 00:02:23.540 
Fetching value of define "__SSE4_2__" : 1 00:02:23.540 Fetching value of define "__AES__" : 1 00:02:23.540 Fetching value of define "__AVX__" : 1 00:02:23.540 Fetching value of define "__AVX2__" : 1 00:02:23.540 Fetching value of define "__AVX512BW__" : 1 00:02:23.540 Fetching value of define "__AVX512CD__" : 1 00:02:23.540 Fetching value of define "__AVX512DQ__" : 1 00:02:23.540 Fetching value of define "__AVX512F__" : 1 00:02:23.540 Fetching value of define "__AVX512VL__" : 1 00:02:23.540 Fetching value of define "__PCLMUL__" : 1 00:02:23.540 Fetching value of define "__RDRND__" : 1 00:02:23.540 Fetching value of define "__RDSEED__" : 1 00:02:23.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.540 Fetching value of define "__znver1__" : (undefined) 00:02:23.540 Fetching value of define "__znver2__" : (undefined) 00:02:23.540 Fetching value of define "__znver3__" : (undefined) 00:02:23.540 Fetching value of define "__znver4__" : (undefined) 00:02:23.540 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.540 Message: lib/log: Defining dependency "log" 00:02:23.540 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.540 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.540 Checking for function "getentropy" : NO 00:02:23.540 Message: lib/eal: Defining dependency "eal" 00:02:23.540 Message: lib/ring: Defining dependency "ring" 00:02:23.540 Message: lib/rcu: Defining dependency "rcu" 00:02:23.540 Message: lib/mempool: Defining dependency "mempool" 00:02:23.540 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.540 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.540 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.540 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.540 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.540 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.540 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 
00:02:23.540 Compiler for C supports arguments -mpclmul: YES 00:02:23.540 Compiler for C supports arguments -maes: YES 00:02:23.540 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.540 Compiler for C supports arguments -mavx512bw: YES 00:02:23.540 Compiler for C supports arguments -mavx512dq: YES 00:02:23.540 Compiler for C supports arguments -mavx512vl: YES 00:02:23.540 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.540 Compiler for C supports arguments -mavx2: YES 00:02:23.540 Compiler for C supports arguments -mavx: YES 00:02:23.540 Message: lib/net: Defining dependency "net" 00:02:23.540 Message: lib/meter: Defining dependency "meter" 00:02:23.540 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.540 Message: lib/pci: Defining dependency "pci" 00:02:23.540 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.540 Message: lib/hash: Defining dependency "hash" 00:02:23.540 Message: lib/timer: Defining dependency "timer" 00:02:23.540 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.540 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.540 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.540 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.540 Message: lib/power: Defining dependency "power" 00:02:23.540 Message: lib/reorder: Defining dependency "reorder" 00:02:23.540 Message: lib/security: Defining dependency "security" 00:02:23.540 Has header "linux/userfaultfd.h" : YES 00:02:23.540 Has header "linux/vduse.h" : YES 00:02:23.540 Message: lib/vhost: Defining dependency "vhost" 00:02:23.540 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.540 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.540 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.540 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:02:23.540 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:23.540 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:23.540 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:23.540 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:23.540 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:23.540 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.540 Configuring doxy-api-html.conf using configuration 00:02:23.540 Configuring doxy-api-man.conf using configuration 00:02:23.540 Program mandb found: YES (/usr/bin/mandb) 00:02:23.540 Program sphinx-build found: NO 00:02:23.540 Configuring rte_build_config.h using configuration 00:02:23.540 Message: 00:02:23.540 ================= 00:02:23.540 Applications Enabled 00:02:23.540 ================= 00:02:23.540 00:02:23.540 apps: 00:02:23.540 00:02:23.540 00:02:23.540 Message: 00:02:23.540 ================= 00:02:23.540 Libraries Enabled 00:02:23.540 ================= 00:02:23.540 00:02:23.540 libs: 00:02:23.540 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:23.540 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:23.540 cryptodev, dmadev, power, reorder, security, vhost, 00:02:23.540 00:02:23.540 Message: 00:02:23.540 =============== 00:02:23.540 Drivers Enabled 00:02:23.540 =============== 00:02:23.540 00:02:23.540 common: 00:02:23.540 00:02:23.540 bus: 00:02:23.540 pci, vdev, 00:02:23.540 mempool: 00:02:23.540 ring, 00:02:23.540 dma: 00:02:23.540 00:02:23.540 net: 00:02:23.540 00:02:23.540 crypto: 00:02:23.540 00:02:23.540 compress: 00:02:23.540 00:02:23.540 vdpa: 00:02:23.540 00:02:23.540 00:02:23.540 Message: 00:02:23.540 ================= 00:02:23.540 Content Skipped 00:02:23.540 ================= 00:02:23.540 00:02:23.540 apps: 00:02:23.540 dumpcap: explicitly disabled via build config 00:02:23.540 graph: explicitly disabled via build 
config 00:02:23.540 pdump: explicitly disabled via build config 00:02:23.540 proc-info: explicitly disabled via build config 00:02:23.540 test-acl: explicitly disabled via build config 00:02:23.540 test-bbdev: explicitly disabled via build config 00:02:23.540 test-cmdline: explicitly disabled via build config 00:02:23.541 test-compress-perf: explicitly disabled via build config 00:02:23.541 test-crypto-perf: explicitly disabled via build config 00:02:23.541 test-dma-perf: explicitly disabled via build config 00:02:23.541 test-eventdev: explicitly disabled via build config 00:02:23.541 test-fib: explicitly disabled via build config 00:02:23.541 test-flow-perf: explicitly disabled via build config 00:02:23.541 test-gpudev: explicitly disabled via build config 00:02:23.541 test-mldev: explicitly disabled via build config 00:02:23.541 test-pipeline: explicitly disabled via build config 00:02:23.541 test-pmd: explicitly disabled via build config 00:02:23.541 test-regex: explicitly disabled via build config 00:02:23.541 test-sad: explicitly disabled via build config 00:02:23.541 test-security-perf: explicitly disabled via build config 00:02:23.541 00:02:23.541 libs: 00:02:23.541 argparse: explicitly disabled via build config 00:02:23.541 metrics: explicitly disabled via build config 00:02:23.541 acl: explicitly disabled via build config 00:02:23.541 bbdev: explicitly disabled via build config 00:02:23.541 bitratestats: explicitly disabled via build config 00:02:23.541 bpf: explicitly disabled via build config 00:02:23.541 cfgfile: explicitly disabled via build config 00:02:23.541 distributor: explicitly disabled via build config 00:02:23.541 efd: explicitly disabled via build config 00:02:23.541 eventdev: explicitly disabled via build config 00:02:23.541 dispatcher: explicitly disabled via build config 00:02:23.541 gpudev: explicitly disabled via build config 00:02:23.541 gro: explicitly disabled via build config 00:02:23.541 gso: explicitly disabled via build config 
00:02:23.541 ip_frag: explicitly disabled via build config 00:02:23.541 jobstats: explicitly disabled via build config 00:02:23.541 latencystats: explicitly disabled via build config 00:02:23.541 lpm: explicitly disabled via build config 00:02:23.541 member: explicitly disabled via build config 00:02:23.541 pcapng: explicitly disabled via build config 00:02:23.541 rawdev: explicitly disabled via build config 00:02:23.541 regexdev: explicitly disabled via build config 00:02:23.541 mldev: explicitly disabled via build config 00:02:23.541 rib: explicitly disabled via build config 00:02:23.541 sched: explicitly disabled via build config 00:02:23.541 stack: explicitly disabled via build config 00:02:23.541 ipsec: explicitly disabled via build config 00:02:23.541 pdcp: explicitly disabled via build config 00:02:23.541 fib: explicitly disabled via build config 00:02:23.541 port: explicitly disabled via build config 00:02:23.541 pdump: explicitly disabled via build config 00:02:23.541 table: explicitly disabled via build config 00:02:23.541 pipeline: explicitly disabled via build config 00:02:23.541 graph: explicitly disabled via build config 00:02:23.541 node: explicitly disabled via build config 00:02:23.541 00:02:23.541 drivers: 00:02:23.541 common/cpt: not in enabled drivers build config 00:02:23.541 common/dpaax: not in enabled drivers build config 00:02:23.541 common/iavf: not in enabled drivers build config 00:02:23.541 common/idpf: not in enabled drivers build config 00:02:23.541 common/ionic: not in enabled drivers build config 00:02:23.541 common/mvep: not in enabled drivers build config 00:02:23.541 common/octeontx: not in enabled drivers build config 00:02:23.541 bus/auxiliary: not in enabled drivers build config 00:02:23.541 bus/cdx: not in enabled drivers build config 00:02:23.541 bus/dpaa: not in enabled drivers build config 00:02:23.541 bus/fslmc: not in enabled drivers build config 00:02:23.541 bus/ifpga: not in enabled drivers build config 00:02:23.541 
bus/platform: not in enabled drivers build config 00:02:23.541 bus/uacce: not in enabled drivers build config 00:02:23.541 bus/vmbus: not in enabled drivers build config 00:02:23.541 common/cnxk: not in enabled drivers build config 00:02:23.541 common/mlx5: not in enabled drivers build config 00:02:23.541 common/nfp: not in enabled drivers build config 00:02:23.541 common/nitrox: not in enabled drivers build config 00:02:23.541 common/qat: not in enabled drivers build config 00:02:23.541 common/sfc_efx: not in enabled drivers build config 00:02:23.541 mempool/bucket: not in enabled drivers build config 00:02:23.541 mempool/cnxk: not in enabled drivers build config 00:02:23.541 mempool/dpaa: not in enabled drivers build config 00:02:23.541 mempool/dpaa2: not in enabled drivers build config 00:02:23.541 mempool/octeontx: not in enabled drivers build config 00:02:23.541 mempool/stack: not in enabled drivers build config 00:02:23.541 dma/cnxk: not in enabled drivers build config 00:02:23.541 dma/dpaa: not in enabled drivers build config 00:02:23.541 dma/dpaa2: not in enabled drivers build config 00:02:23.541 dma/hisilicon: not in enabled drivers build config 00:02:23.541 dma/idxd: not in enabled drivers build config 00:02:23.541 dma/ioat: not in enabled drivers build config 00:02:23.541 dma/skeleton: not in enabled drivers build config 00:02:23.541 net/af_packet: not in enabled drivers build config 00:02:23.541 net/af_xdp: not in enabled drivers build config 00:02:23.541 net/ark: not in enabled drivers build config 00:02:23.541 net/atlantic: not in enabled drivers build config 00:02:23.541 net/avp: not in enabled drivers build config 00:02:23.541 net/axgbe: not in enabled drivers build config 00:02:23.541 net/bnx2x: not in enabled drivers build config 00:02:23.541 net/bnxt: not in enabled drivers build config 00:02:23.541 net/bonding: not in enabled drivers build config 00:02:23.541 net/cnxk: not in enabled drivers build config 00:02:23.541 net/cpfl: not in enabled 
drivers build config 00:02:23.541 net/cxgbe: not in enabled drivers build config 00:02:23.541 net/dpaa: not in enabled drivers build config 00:02:23.541 net/dpaa2: not in enabled drivers build config 00:02:23.541 net/e1000: not in enabled drivers build config 00:02:23.541 net/ena: not in enabled drivers build config 00:02:23.541 net/enetc: not in enabled drivers build config 00:02:23.541 net/enetfec: not in enabled drivers build config 00:02:23.541 net/enic: not in enabled drivers build config 00:02:23.541 net/failsafe: not in enabled drivers build config 00:02:23.541 net/fm10k: not in enabled drivers build config 00:02:23.541 net/gve: not in enabled drivers build config 00:02:23.541 net/hinic: not in enabled drivers build config 00:02:23.541 net/hns3: not in enabled drivers build config 00:02:23.541 net/i40e: not in enabled drivers build config 00:02:23.541 net/iavf: not in enabled drivers build config 00:02:23.541 net/ice: not in enabled drivers build config 00:02:23.541 net/idpf: not in enabled drivers build config 00:02:23.541 net/igc: not in enabled drivers build config 00:02:23.541 net/ionic: not in enabled drivers build config 00:02:23.541 net/ipn3ke: not in enabled drivers build config 00:02:23.541 net/ixgbe: not in enabled drivers build config 00:02:23.541 net/mana: not in enabled drivers build config 00:02:23.541 net/memif: not in enabled drivers build config 00:02:23.541 net/mlx4: not in enabled drivers build config 00:02:23.541 net/mlx5: not in enabled drivers build config 00:02:23.541 net/mvneta: not in enabled drivers build config 00:02:23.541 net/mvpp2: not in enabled drivers build config 00:02:23.541 net/netvsc: not in enabled drivers build config 00:02:23.541 net/nfb: not in enabled drivers build config 00:02:23.541 net/nfp: not in enabled drivers build config 00:02:23.541 net/ngbe: not in enabled drivers build config 00:02:23.541 net/null: not in enabled drivers build config 00:02:23.541 net/octeontx: not in enabled drivers build config 
00:02:23.541 net/octeon_ep: not in enabled drivers build config 00:02:23.541 net/pcap: not in enabled drivers build config 00:02:23.541 net/pfe: not in enabled drivers build config 00:02:23.541 net/qede: not in enabled drivers build config 00:02:23.541 net/ring: not in enabled drivers build config 00:02:23.541 net/sfc: not in enabled drivers build config 00:02:23.541 net/softnic: not in enabled drivers build config 00:02:23.541 net/tap: not in enabled drivers build config 00:02:23.541 net/thunderx: not in enabled drivers build config 00:02:23.541 net/txgbe: not in enabled drivers build config 00:02:23.541 net/vdev_netvsc: not in enabled drivers build config 00:02:23.541 net/vhost: not in enabled drivers build config 00:02:23.541 net/virtio: not in enabled drivers build config 00:02:23.541 net/vmxnet3: not in enabled drivers build config 00:02:23.541 raw/*: missing internal dependency, "rawdev" 00:02:23.541 crypto/armv8: not in enabled drivers build config 00:02:23.541 crypto/bcmfs: not in enabled drivers build config 00:02:23.541 crypto/caam_jr: not in enabled drivers build config 00:02:23.541 crypto/ccp: not in enabled drivers build config 00:02:23.541 crypto/cnxk: not in enabled drivers build config 00:02:23.541 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.541 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.541 crypto/ipsec_mb: not in enabled drivers build config 00:02:23.542 crypto/mlx5: not in enabled drivers build config 00:02:23.542 crypto/mvsam: not in enabled drivers build config 00:02:23.542 crypto/nitrox: not in enabled drivers build config 00:02:23.542 crypto/null: not in enabled drivers build config 00:02:23.542 crypto/octeontx: not in enabled drivers build config 00:02:23.542 crypto/openssl: not in enabled drivers build config 00:02:23.542 crypto/scheduler: not in enabled drivers build config 00:02:23.542 crypto/uadk: not in enabled drivers build config 00:02:23.542 crypto/virtio: not in enabled drivers build config 
00:02:23.542 compress/isal: not in enabled drivers build config 00:02:23.542 compress/mlx5: not in enabled drivers build config 00:02:23.542 compress/nitrox: not in enabled drivers build config 00:02:23.542 compress/octeontx: not in enabled drivers build config 00:02:23.542 compress/zlib: not in enabled drivers build config 00:02:23.542 regex/*: missing internal dependency, "regexdev" 00:02:23.542 ml/*: missing internal dependency, "mldev" 00:02:23.542 vdpa/ifc: not in enabled drivers build config 00:02:23.542 vdpa/mlx5: not in enabled drivers build config 00:02:23.542 vdpa/nfp: not in enabled drivers build config 00:02:23.542 vdpa/sfc: not in enabled drivers build config 00:02:23.542 event/*: missing internal dependency, "eventdev" 00:02:23.542 baseband/*: missing internal dependency, "bbdev" 00:02:23.542 gpu/*: missing internal dependency, "gpudev" 00:02:23.542 00:02:23.542 00:02:23.542 Build targets in project: 85 00:02:23.542 00:02:23.542 DPDK 24.03.0 00:02:23.542 00:02:23.542 User defined options 00:02:23.542 buildtype : debug 00:02:23.542 default_library : shared 00:02:23.542 libdir : lib 00:02:23.542 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:23.542 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.542 c_link_args : 00:02:23.542 cpu_instruction_set: native 00:02:23.542 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:23.542 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:23.542 enable_docs : false 00:02:23.542 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:23.542 enable_kmods : false 00:02:23.542 max_lcores : 128 00:02:23.542 tests : false 00:02:23.542 00:02:23.542 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.812 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:23.812 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:24.073 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.073 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:24.073 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:24.073 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:24.073 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:24.073 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:24.073 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.073 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:24.073 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:24.073 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:24.073 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:24.073 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.073 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:24.073 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:24.073 [16/268] Linking static target lib/librte_kvargs.a 00:02:24.073 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:24.073 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:24.073 [19/268] Linking 
static target lib/librte_log.a 00:02:24.335 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.335 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.335 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.335 [23/268] Linking static target lib/librte_pci.a 00:02:24.335 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.335 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.335 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.335 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.335 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.335 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.335 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.335 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.335 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.335 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.335 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.335 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.596 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.596 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.596 [38/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.596 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.596 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.596 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.596 
[42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.596 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.596 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.596 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.596 [46/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.596 [47/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.596 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.596 [49/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.596 [50/268] Linking static target lib/librte_meter.a 00:02:24.596 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.596 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.596 [53/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.596 [54/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.596 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.596 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.596 [57/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.596 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.596 [59/268] Linking static target lib/librte_ring.a 00:02:24.596 [60/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.596 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.596 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.596 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.596 [64/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.596 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.596 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.596 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.596 [68/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.596 [69/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.596 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.596 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.596 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.596 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.596 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.596 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.596 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.596 [77/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.596 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.596 [79/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.596 [80/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.596 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.596 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.596 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.596 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.596 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.596 [86/268] Linking static target lib/librte_telemetry.a 00:02:24.596 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.596 
[88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.596 [89/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.596 [90/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.596 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.596 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.596 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.596 [94/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.596 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.596 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.597 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.597 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.597 [99/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.597 [100/268] Linking static target lib/librte_net.a 00:02:24.597 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.597 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.597 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.597 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.597 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.597 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.597 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.597 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.597 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.597 [110/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:24.597 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:24.597 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:24.855 [113/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.855 [114/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:24.855 [115/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:24.855 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:24.855 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:24.855 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:24.855 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:24.855 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:24.855 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:24.855 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:24.855 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:24.855 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:24.855 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:24.855 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:24.855 [127/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:24.855 [128/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.855 [129/268] Linking static target lib/librte_mempool.a
00:02:24.855 [130/268] Linking static target lib/librte_eal.a
00:02:24.855 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:24.855 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:24.855 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:24.855 [134/268] Linking static target lib/librte_cmdline.a
00:02:24.855 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:24.855 [136/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:24.855 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.855 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.855 [139/268] Linking static target lib/librte_rcu.a
00:02:24.855 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:24.855 [141/268] Linking target lib/librte_log.so.24.1
00:02:24.855 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:25.113 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:25.113 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.113 [145/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:25.113 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:25.113 [147/268] Linking static target lib/librte_mbuf.a
00:02:25.113 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:25.113 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:25.113 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:25.113 [151/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:25.113 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:25.113 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:25.113 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:25.113 [155/268] Linking static target lib/librte_dmadev.a
00:02:25.113 [156/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:25.113 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:25.113 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:25.113 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:25.113 [160/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:25.113 [161/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:25.113 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:25.113 [163/268] Linking static target lib/librte_timer.a
00:02:25.113 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.113 [165/268] Linking static target lib/librte_reorder.a
00:02:25.113 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:25.113 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:25.113 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:25.113 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:25.113 [170/268] Linking static target lib/librte_compressdev.a
00:02:25.113 [171/268] Linking target lib/librte_kvargs.so.24.1
00:02:25.113 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:25.113 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:25.113 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:25.113 [175/268] Linking static target lib/librte_power.a
00:02:25.113 [176/268] Linking target lib/librte_telemetry.so.24.1
00:02:25.113 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:25.113 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:25.113 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:25.113 [180/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:25.113 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:25.113 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:25.113 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:25.113 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:25.113 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:25.113 [186/268] Linking static target lib/librte_hash.a
00:02:25.372 [187/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.372 [188/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:25.372 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:25.372 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:25.372 [191/268] Linking static target lib/librte_security.a
00:02:25.372 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:25.372 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:25.372 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:25.372 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:25.372 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:25.372 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:25.372 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:25.372 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:25.372 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:25.372 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:25.372 [202/268] Linking static target drivers/librte_bus_vdev.a
00:02:25.631 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:25.631 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.631 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.631 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:25.631 [207/268] Linking static target drivers/librte_bus_pci.a
00:02:25.631 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.631 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.631 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:25.631 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:25.631 [212/268] Linking static target drivers/librte_mempool_ring.a
00:02:25.631 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.631 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:25.631 [215/268] Linking static target lib/librte_cryptodev.a
00:02:25.631 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:25.631 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.631 [218/268] Linking static target lib/librte_ethdev.a
00:02:25.631 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.889 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.889 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.889 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.148 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.148 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.148 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:26.148 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.406 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.357 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:27.357 [229/268] Linking static target lib/librte_vhost.a
00:02:27.616 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.990 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.258 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.824 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.082 [234/268] Linking target lib/librte_eal.so.24.1
00:02:35.082 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:35.082 [236/268] Linking target lib/librte_ring.so.24.1
00:02:35.082 [237/268] Linking target lib/librte_timer.so.24.1
00:02:35.082 [238/268] Linking target lib/librte_meter.so.24.1
00:02:35.082 [239/268] Linking target lib/librte_pci.so.24.1
00:02:35.082 [240/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:35.082 [241/268] Linking target lib/librte_dmadev.so.24.1
00:02:35.345 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:35.345 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:35.345 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:35.345 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:35.345 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:35.345 [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:35.345 [248/268] Linking target lib/librte_rcu.so.24.1
00:02:35.345 [249/268] Linking target lib/librte_mempool.so.24.1
00:02:35.345 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:35.645 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:35.645 [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:35.645 [253/268] Linking target lib/librte_mbuf.so.24.1
00:02:35.645 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:35.645 [255/268] Linking target lib/librte_compressdev.so.24.1
00:02:35.645 [256/268] Linking target lib/librte_reorder.so.24.1
00:02:35.645 [257/268] Linking target lib/librte_net.so.24.1
00:02:35.645 [258/268] Linking target lib/librte_cryptodev.so.24.1
00:02:35.967 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:35.967 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:35.967 [261/268] Linking target lib/librte_hash.so.24.1
00:02:35.967 [262/268] Linking target lib/librte_security.so.24.1
00:02:35.967 [263/268] Linking target lib/librte_cmdline.so.24.1
00:02:35.967 [264/268] Linking target lib/librte_ethdev.so.24.1
00:02:35.967 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:35.967 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:36.281 [267/268] Linking target lib/librte_power.so.24.1
00:02:36.281 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:36.281 INFO: autodetecting backend as ninja
00:02:36.281 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:02:46.255 CC lib/ut_mock/mock.o
00:02:46.255 CC lib/log/log.o
00:02:46.255 CC lib/ut/ut.o
00:02:46.255 CC lib/log/log_flags.o
00:02:46.255 CC lib/log/log_deprecated.o
00:02:46.255 LIB libspdk_log.a
00:02:46.255 LIB libspdk_ut_mock.a
00:02:46.255 LIB libspdk_ut.a
00:02:46.514 SO libspdk_ut_mock.so.6.0
00:02:46.514 SO libspdk_ut.so.2.0
00:02:46.514 SO libspdk_log.so.7.1
00:02:46.514 SYMLINK libspdk_ut_mock.so
00:02:46.514 SYMLINK libspdk_ut.so
00:02:46.514 SYMLINK libspdk_log.so
00:02:46.773 CC lib/dma/dma.o
00:02:46.773 CXX lib/trace_parser/trace.o
00:02:46.773 CC lib/util/base64.o
00:02:46.773 CC lib/util/bit_array.o
00:02:46.773 CC lib/util/cpuset.o
00:02:46.773 CC lib/ioat/ioat.o
00:02:46.773 CC lib/util/crc16.o
00:02:46.773 CC lib/util/crc32.o
00:02:46.773 CC lib/util/crc32c.o
00:02:46.773 CC lib/util/crc32_ieee.o
00:02:46.773 CC lib/util/crc64.o
00:02:46.773 CC lib/util/dif.o
00:02:46.773 CC lib/util/fd.o
00:02:46.773 CC lib/util/fd_group.o
00:02:46.773 CC lib/util/file.o
00:02:46.773 CC lib/util/hexlify.o
00:02:46.773 CC lib/util/iov.o
00:02:46.773 CC lib/util/math.o
00:02:46.773 CC lib/util/net.o
00:02:46.773 CC lib/util/pipe.o
00:02:46.773 CC lib/util/strerror_tls.o
00:02:46.773 CC lib/util/string.o
00:02:46.773 CC lib/util/uuid.o
00:02:46.773 CC lib/util/xor.o
00:02:46.773 CC lib/util/zipf.o
00:02:46.773 CC lib/util/md5.o
00:02:47.031 CC lib/vfio_user/host/vfio_user_pci.o
00:02:47.031 CC lib/vfio_user/host/vfio_user.o
00:02:47.031 LIB libspdk_dma.a
00:02:47.031 SO libspdk_dma.so.5.0
00:02:47.031 LIB libspdk_ioat.a
00:02:47.031 SYMLINK libspdk_dma.so
00:02:47.031 SO libspdk_ioat.so.7.0
00:02:47.289 SYMLINK libspdk_ioat.so
00:02:47.289 LIB libspdk_vfio_user.a
00:02:47.289 SO libspdk_vfio_user.so.5.0
00:02:47.289 SYMLINK libspdk_vfio_user.so
00:02:47.289 LIB libspdk_util.a
00:02:47.289 SO libspdk_util.so.10.1
00:02:47.547 SYMLINK libspdk_util.so
00:02:47.547 LIB libspdk_trace_parser.a
00:02:47.547 SO libspdk_trace_parser.so.6.0
00:02:47.547 SYMLINK libspdk_trace_parser.so
00:02:47.809 CC lib/idxd/idxd.o
00:02:47.809 CC lib/rdma_utils/rdma_utils.o
00:02:47.809 CC lib/idxd/idxd_user.o
00:02:47.809 CC lib/conf/conf.o
00:02:47.809 CC lib/json/json_parse.o
00:02:47.809 CC lib/idxd/idxd_kernel.o
00:02:47.809 CC lib/vmd/vmd.o
00:02:47.809 CC lib/env_dpdk/env.o
00:02:47.809 CC lib/json/json_util.o
00:02:47.809 CC lib/vmd/led.o
00:02:47.809 CC lib/json/json_write.o
00:02:47.809 CC lib/env_dpdk/memory.o
00:02:47.809 CC lib/env_dpdk/pci.o
00:02:47.809 CC lib/env_dpdk/init.o
00:02:47.809 CC lib/env_dpdk/threads.o
00:02:47.809 CC lib/env_dpdk/pci_ioat.o
00:02:47.809 CC lib/env_dpdk/pci_virtio.o
00:02:47.809 CC lib/env_dpdk/pci_vmd.o
00:02:47.809 CC lib/env_dpdk/pci_idxd.o
00:02:47.809 CC lib/env_dpdk/pci_event.o
00:02:47.809 CC lib/env_dpdk/sigbus_handler.o
00:02:47.809 CC lib/env_dpdk/pci_dpdk.o
00:02:47.809 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:47.809 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:48.067 LIB libspdk_conf.a
00:02:48.067 LIB libspdk_json.a
00:02:48.067 SO libspdk_conf.so.6.0
00:02:48.067 LIB libspdk_rdma_utils.a
00:02:48.067 SO libspdk_rdma_utils.so.1.0
00:02:48.067 SO libspdk_json.so.6.0
00:02:48.326 SYMLINK libspdk_conf.so
00:02:48.326 SYMLINK libspdk_rdma_utils.so
00:02:48.326 SYMLINK libspdk_json.so
00:02:48.326 LIB libspdk_vmd.a
00:02:48.326 LIB libspdk_idxd.a
00:02:48.326 SO libspdk_idxd.so.12.1
00:02:48.326 SO libspdk_vmd.so.6.0
00:02:48.584 SYMLINK libspdk_idxd.so
00:02:48.584 SYMLINK libspdk_vmd.so
00:02:48.584 CC lib/rdma_provider/common.o
00:02:48.584 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:48.584 CC lib/jsonrpc/jsonrpc_server.o
00:02:48.584 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:48.584 CC lib/jsonrpc/jsonrpc_client.o
00:02:48.585 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:48.843 LIB libspdk_rdma_provider.a
00:02:48.843 SO libspdk_rdma_provider.so.7.0
00:02:48.843 LIB libspdk_jsonrpc.a
00:02:48.843 SYMLINK libspdk_rdma_provider.so
00:02:48.843 SO libspdk_jsonrpc.so.6.0
00:02:48.843 LIB libspdk_env_dpdk.a
00:02:48.843 SYMLINK libspdk_jsonrpc.so
00:02:48.843 SO libspdk_env_dpdk.so.15.1
00:02:49.102 SYMLINK libspdk_env_dpdk.so
00:02:49.361 CC lib/rpc/rpc.o
00:02:49.361 LIB libspdk_rpc.a
00:02:49.361 SO libspdk_rpc.so.6.0
00:02:49.621 SYMLINK libspdk_rpc.so
00:02:49.880 CC lib/trace/trace.o
00:02:49.880 CC lib/trace/trace_flags.o
00:02:49.880 CC lib/trace/trace_rpc.o
00:02:49.880 CC lib/notify/notify.o
00:02:49.880 CC lib/keyring/keyring.o
00:02:49.880 CC lib/notify/notify_rpc.o
00:02:49.880 CC lib/keyring/keyring_rpc.o
00:02:50.138 LIB libspdk_notify.a
00:02:50.139 SO libspdk_notify.so.6.0
00:02:50.139 LIB libspdk_keyring.a
00:02:50.139 LIB libspdk_trace.a
00:02:50.139 SO libspdk_keyring.so.2.0
00:02:50.139 SYMLINK libspdk_notify.so
00:02:50.139 SO libspdk_trace.so.11.0
00:02:50.139 SYMLINK libspdk_keyring.so
00:02:50.139 SYMLINK libspdk_trace.so
00:02:50.705 CC lib/sock/sock.o
00:02:50.705 CC lib/sock/sock_rpc.o
00:02:50.705 CC lib/thread/thread.o
00:02:50.705 CC lib/thread/iobuf.o
00:02:50.964 LIB libspdk_sock.a
00:02:50.964 SO libspdk_sock.so.10.0
00:02:50.964 SYMLINK libspdk_sock.so
00:02:51.221 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:51.221 CC lib/nvme/nvme_ctrlr.o
00:02:51.221 CC lib/nvme/nvme_fabric.o
00:02:51.221 CC lib/nvme/nvme_ns_cmd.o
00:02:51.221 CC lib/nvme/nvme_ns.o
00:02:51.221 CC lib/nvme/nvme_pcie_common.o
00:02:51.221 CC lib/nvme/nvme_pcie.o
00:02:51.221 CC lib/nvme/nvme_qpair.o
00:02:51.221 CC lib/nvme/nvme.o
00:02:51.221 CC lib/nvme/nvme_quirks.o
00:02:51.221 CC lib/nvme/nvme_transport.o
00:02:51.221 CC lib/nvme/nvme_discovery.o
00:02:51.221 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:51.221 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:51.221 CC lib/nvme/nvme_tcp.o
00:02:51.221 CC lib/nvme/nvme_opal.o
00:02:51.221 CC lib/nvme/nvme_poll_group.o
00:02:51.221 CC lib/nvme/nvme_io_msg.o
00:02:51.221 CC lib/nvme/nvme_zns.o
00:02:51.221 CC lib/nvme/nvme_stubs.o
00:02:51.221 CC lib/nvme/nvme_auth.o
00:02:51.221 CC lib/nvme/nvme_cuse.o
00:02:51.221 CC lib/nvme/nvme_vfio_user.o
00:02:51.221 CC lib/nvme/nvme_rdma.o
00:02:51.791 LIB libspdk_thread.a
00:02:51.791 SO libspdk_thread.so.11.0
00:02:51.791 SYMLINK libspdk_thread.so
00:02:52.049 CC lib/blob/blobstore.o
00:02:52.049 CC lib/blob/request.o
00:02:52.049 CC lib/blob/zeroes.o
00:02:52.049 CC lib/blob/blob_bs_dev.o
00:02:52.049 CC lib/accel/accel.o
00:02:52.049 CC lib/accel/accel_sw.o
00:02:52.049 CC lib/accel/accel_rpc.o
00:02:52.049 CC lib/init/json_config.o
00:02:52.049 CC lib/init/subsystem.o
00:02:52.049 CC lib/init/subsystem_rpc.o
00:02:52.049 CC lib/init/rpc.o
00:02:52.049 CC lib/virtio/virtio.o
00:02:52.049 CC lib/fsdev/fsdev.o
00:02:52.049 CC lib/virtio/virtio_vhost_user.o
00:02:52.049 CC lib/fsdev/fsdev_io.o
00:02:52.049 CC lib/virtio/virtio_vfio_user.o
00:02:52.049 CC lib/fsdev/fsdev_rpc.o
00:02:52.049 CC lib/virtio/virtio_pci.o
00:02:52.049 CC lib/vfu_tgt/tgt_endpoint.o
00:02:52.049 CC lib/vfu_tgt/tgt_rpc.o
00:02:52.308 LIB libspdk_init.a
00:02:52.308 SO libspdk_init.so.6.0
00:02:52.308 LIB libspdk_virtio.a
00:02:52.308 LIB libspdk_vfu_tgt.a
00:02:52.308 SO libspdk_virtio.so.7.0
00:02:52.308 SYMLINK libspdk_init.so
00:02:52.308 SO libspdk_vfu_tgt.so.3.0
00:02:52.566 SYMLINK libspdk_virtio.so
00:02:52.566 SYMLINK libspdk_vfu_tgt.so
00:02:52.566 LIB libspdk_fsdev.a
00:02:52.566 SO libspdk_fsdev.so.2.0
00:02:52.824 SYMLINK libspdk_fsdev.so
00:02:52.824 CC lib/event/app.o
00:02:52.824 CC lib/event/reactor.o
00:02:52.824 CC lib/event/log_rpc.o
00:02:52.824 CC lib/event/app_rpc.o
00:02:52.824 CC lib/event/scheduler_static.o
00:02:52.824 LIB libspdk_accel.a
00:02:52.824 SO libspdk_accel.so.16.0
00:02:53.083 SYMLINK libspdk_accel.so
00:02:53.083 LIB libspdk_nvme.a
00:02:53.083 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:53.083 LIB libspdk_event.a
00:02:53.083 SO libspdk_event.so.14.0
00:02:53.083 SO libspdk_nvme.so.15.0
00:02:53.083 SYMLINK libspdk_event.so
00:02:53.341 SYMLINK libspdk_nvme.so
00:02:53.341 CC lib/bdev/bdev.o
00:02:53.341 CC lib/bdev/bdev_rpc.o
00:02:53.341 CC lib/bdev/bdev_zone.o
00:02:53.341 CC lib/bdev/part.o
00:02:53.341 CC lib/bdev/scsi_nvme.o
00:02:53.600 LIB libspdk_fuse_dispatcher.a
00:02:53.600 SO libspdk_fuse_dispatcher.so.1.0
00:02:53.600 SYMLINK libspdk_fuse_dispatcher.so
00:02:54.166 LIB libspdk_blob.a
00:02:54.166 SO libspdk_blob.so.12.0
00:02:54.425 SYMLINK libspdk_blob.so
00:02:54.683 CC lib/lvol/lvol.o
00:02:54.683 CC lib/blobfs/blobfs.o
00:02:54.683 CC lib/blobfs/tree.o
00:02:55.250 LIB libspdk_bdev.a
00:02:55.250 SO libspdk_bdev.so.17.0
00:02:55.250 LIB libspdk_blobfs.a
00:02:55.250 SYMLINK libspdk_bdev.so
00:02:55.250 SO libspdk_blobfs.so.11.0
00:02:55.250 LIB libspdk_lvol.a
00:02:55.508 SO libspdk_lvol.so.11.0
00:02:55.508 SYMLINK libspdk_blobfs.so
00:02:55.508 SYMLINK libspdk_lvol.so
00:02:55.769 CC lib/nvmf/ctrlr.o
00:02:55.769 CC lib/nvmf/ctrlr_discovery.o
00:02:55.769 CC lib/nvmf/ctrlr_bdev.o
00:02:55.769 CC lib/nvmf/subsystem.o
00:02:55.769 CC lib/nvmf/nvmf.o
00:02:55.769 CC lib/nvmf/nvmf_rpc.o
00:02:55.769 CC lib/nvmf/transport.o
00:02:55.769 CC lib/nvmf/tcp.o
00:02:55.769 CC lib/nvmf/stubs.o
00:02:55.769 CC lib/nvmf/mdns_server.o
00:02:55.769 CC lib/nvmf/vfio_user.o
00:02:55.769 CC lib/nvmf/rdma.o
00:02:55.769 CC lib/nvmf/auth.o
00:02:55.769 CC lib/nbd/nbd.o
00:02:55.769 CC lib/nbd/nbd_rpc.o
00:02:55.769 CC lib/ftl/ftl_core.o
00:02:55.769 CC lib/scsi/dev.o
00:02:55.769 CC lib/ftl/ftl_init.o
00:02:55.769 CC lib/ublk/ublk.o
00:02:55.769 CC lib/scsi/lun.o
00:02:55.769 CC lib/ftl/ftl_layout.o
00:02:55.769 CC lib/ublk/ublk_rpc.o
00:02:55.769 CC lib/scsi/port.o
00:02:55.769 CC lib/ftl/ftl_debug.o
00:02:55.769 CC lib/scsi/scsi.o
00:02:55.769 CC lib/ftl/ftl_io.o
00:02:55.769 CC lib/scsi/scsi_bdev.o
00:02:55.769 CC lib/ftl/ftl_sb.o
00:02:55.769 CC lib/ftl/ftl_l2p.o
00:02:55.769 CC lib/scsi/scsi_pr.o
00:02:55.769 CC lib/scsi/scsi_rpc.o
00:02:55.769 CC lib/ftl/ftl_l2p_flat.o
00:02:55.769 CC lib/scsi/task.o
00:02:55.770 CC lib/ftl/ftl_nv_cache.o
00:02:55.770 CC lib/ftl/ftl_band.o
00:02:55.770 CC lib/ftl/ftl_band_ops.o
00:02:55.770 CC lib/ftl/ftl_writer.o
00:02:55.770 CC lib/ftl/ftl_rq.o
00:02:55.770 CC lib/ftl/ftl_reloc.o
00:02:55.770 CC lib/ftl/ftl_l2p_cache.o
00:02:55.770 CC lib/ftl/ftl_p2l.o
00:02:55.770 CC lib/ftl/ftl_p2l_log.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:55.770 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:55.770 CC lib/ftl/utils/ftl_md.o
00:02:55.770 CC lib/ftl/utils/ftl_conf.o
00:02:55.770 CC lib/ftl/utils/ftl_mempool.o
00:02:55.770 CC lib/ftl/utils/ftl_bitmap.o
00:02:55.770 CC lib/ftl/utils/ftl_property.o
00:02:55.770 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:55.770 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:55.770 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:55.770 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:55.770 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:55.770 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:55.770 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:55.770 CC lib/ftl/base/ftl_base_dev.o
00:02:55.770 CC lib/ftl/base/ftl_base_bdev.o
00:02:55.770 CC lib/ftl/ftl_trace.o
00:02:55.770 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:56.686 LIB libspdk_nbd.a
00:02:56.686 SO libspdk_nbd.so.7.0
00:02:56.686 LIB libspdk_scsi.a
00:02:56.686 LIB libspdk_ublk.a
00:02:56.686 SO libspdk_ublk.so.3.0
00:02:56.686 SYMLINK libspdk_nbd.so
00:02:56.686 SO libspdk_scsi.so.9.0
00:02:56.686 SYMLINK libspdk_ublk.so
00:02:56.686 SYMLINK libspdk_scsi.so
00:02:56.686 LIB libspdk_ftl.a
00:02:56.944 CC lib/vhost/vhost.o
00:02:56.944 CC lib/vhost/vhost_rpc.o
00:02:56.944 CC lib/vhost/vhost_scsi.o
00:02:56.944 CC lib/vhost/vhost_blk.o
00:02:56.944 CC lib/vhost/rte_vhost_user.o
00:02:56.944 CC lib/iscsi/conn.o
00:02:56.944 CC lib/iscsi/init_grp.o
00:02:56.944 CC lib/iscsi/iscsi.o
00:02:56.944 CC lib/iscsi/param.o
00:02:56.944 CC lib/iscsi/portal_grp.o
00:02:56.944 CC lib/iscsi/tgt_node.o
00:02:56.944 CC lib/iscsi/iscsi_subsystem.o
00:02:56.944 CC lib/iscsi/iscsi_rpc.o
00:02:56.944 CC lib/iscsi/task.o
00:02:56.944 SO libspdk_ftl.so.9.0
00:02:57.203 SYMLINK libspdk_ftl.so
00:02:57.461 LIB libspdk_nvmf.a
00:02:57.721 SO libspdk_nvmf.so.20.0
00:02:57.721 LIB libspdk_vhost.a
00:02:57.721 SO libspdk_vhost.so.8.0
00:02:57.721 SYMLINK libspdk_nvmf.so
00:02:57.721 SYMLINK libspdk_vhost.so
00:02:57.980 LIB libspdk_iscsi.a
00:02:57.980 SO libspdk_iscsi.so.8.0
00:02:57.980 SYMLINK libspdk_iscsi.so
00:02:58.547 CC module/env_dpdk/env_dpdk_rpc.o
00:02:58.547 CC module/vfu_device/vfu_virtio.o
00:02:58.547 CC module/vfu_device/vfu_virtio_scsi.o
00:02:58.547 CC module/vfu_device/vfu_virtio_blk.o
00:02:58.547 CC module/vfu_device/vfu_virtio_rpc.o
00:02:58.547 CC module/vfu_device/vfu_virtio_fs.o
00:02:58.806 LIB libspdk_env_dpdk_rpc.a
00:02:58.806 CC module/keyring/file/keyring.o
00:02:58.806 CC module/scheduler/gscheduler/gscheduler.o
00:02:58.806 CC module/keyring/file/keyring_rpc.o
00:02:58.806 CC module/accel/dsa/accel_dsa.o
00:02:58.806 CC module/accel/dsa/accel_dsa_rpc.o
00:02:58.806 CC module/accel/ioat/accel_ioat.o
00:02:58.806 CC module/accel/ioat/accel_ioat_rpc.o
00:02:58.806 CC module/accel/error/accel_error.o
00:02:58.806 CC module/accel/error/accel_error_rpc.o
00:02:58.806 SO libspdk_env_dpdk_rpc.so.6.0
00:02:58.806 CC module/sock/posix/posix.o
00:02:58.806 CC module/keyring/linux/keyring.o
00:02:58.806 CC module/accel/iaa/accel_iaa.o
00:02:58.806 CC module/keyring/linux/keyring_rpc.o
00:02:58.806 CC module/accel/iaa/accel_iaa_rpc.o
00:02:58.806 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:58.806 CC module/blob/bdev/blob_bdev.o
00:02:58.806 CC module/fsdev/aio/fsdev_aio.o
00:02:58.806 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:58.806 CC module/fsdev/aio/linux_aio_mgr.o
00:02:58.806 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:58.806 SYMLINK libspdk_env_dpdk_rpc.so
00:02:59.064 LIB libspdk_keyring_file.a
00:02:59.064 LIB libspdk_keyring_linux.a
00:02:59.064 LIB libspdk_scheduler_gscheduler.a
00:02:59.064 LIB libspdk_scheduler_dpdk_governor.a
00:02:59.064 SO libspdk_keyring_file.so.2.0
00:02:59.064 SO libspdk_keyring_linux.so.1.0
00:02:59.064 LIB libspdk_accel_ioat.a
00:02:59.064 SO libspdk_scheduler_gscheduler.so.4.0
00:02:59.064 LIB libspdk_accel_iaa.a
00:02:59.064 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:59.064 LIB libspdk_accel_error.a
00:02:59.064 LIB libspdk_scheduler_dynamic.a
00:02:59.064 SO libspdk_accel_ioat.so.6.0
00:02:59.064 SYMLINK libspdk_keyring_file.so
00:02:59.064 SO libspdk_accel_iaa.so.3.0
00:02:59.064 SYMLINK libspdk_keyring_linux.so
00:02:59.064 SYMLINK libspdk_scheduler_gscheduler.so
00:02:59.064 SO libspdk_accel_error.so.2.0
00:02:59.064 SO libspdk_scheduler_dynamic.so.4.0
00:02:59.064 LIB libspdk_accel_dsa.a
00:02:59.064 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:59.064 LIB libspdk_blob_bdev.a
00:02:59.064 SYMLINK libspdk_accel_ioat.so
00:02:59.064 SO libspdk_accel_dsa.so.5.0
00:02:59.064 SYMLINK libspdk_accel_error.so
00:02:59.064 SYMLINK libspdk_accel_iaa.so
00:02:59.064 SYMLINK libspdk_scheduler_dynamic.so
00:02:59.064 SO libspdk_blob_bdev.so.12.0
00:02:59.064 LIB libspdk_vfu_device.a
00:02:59.064 SYMLINK libspdk_accel_dsa.so
00:02:59.323 SYMLINK libspdk_blob_bdev.so
00:02:59.323 SO libspdk_vfu_device.so.3.0
00:02:59.323 SYMLINK libspdk_vfu_device.so
00:02:59.323 LIB libspdk_fsdev_aio.a
00:02:59.323 SO libspdk_fsdev_aio.so.1.0
00:02:59.323 LIB libspdk_sock_posix.a
00:02:59.604 SO libspdk_sock_posix.so.6.0
00:02:59.604 SYMLINK libspdk_fsdev_aio.so
00:02:59.604 SYMLINK libspdk_sock_posix.so
00:02:59.604 CC module/bdev/null/bdev_null.o
00:02:59.604 CC module/bdev/null/bdev_null_rpc.o
00:02:59.604 CC module/blobfs/bdev/blobfs_bdev.o
00:02:59.604 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:59.604 CC module/bdev/gpt/gpt.o
00:02:59.604 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:59.604 CC module/bdev/delay/vbdev_delay.o
00:02:59.604 CC module/bdev/gpt/vbdev_gpt.o
00:02:59.604 CC module/bdev/ftl/bdev_ftl.o
00:02:59.604 CC module/bdev/passthru/vbdev_passthru.o
00:02:59.604 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:59.604 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:59.604 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:59.604 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:59.604 CC module/bdev/malloc/bdev_malloc.o
00:02:59.604 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:59.604 CC module/bdev/raid/bdev_raid.o
00:02:59.604 CC module/bdev/nvme/bdev_nvme.o
00:02:59.604 CC module/bdev/raid/bdev_raid_rpc.o
00:02:59.604 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:59.604 CC module/bdev/nvme/nvme_rpc.o
00:02:59.604 CC module/bdev/raid/bdev_raid_sb.o
00:02:59.604 CC module/bdev/lvol/vbdev_lvol.o
00:02:59.604 CC module/bdev/nvme/bdev_mdns_client.o
00:02:59.604 CC module/bdev/raid/raid0.o
00:02:59.604 CC module/bdev/nvme/vbdev_opal.o
00:02:59.604 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:59.604 CC module/bdev/raid/raid1.o
00:02:59.604 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:59.604 CC module/bdev/raid/concat.o
00:02:59.862 CC module/bdev/error/vbdev_error.o
00:02:59.862 CC module/bdev/aio/bdev_aio.o
00:02:59.862 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:59.862 CC module/bdev/error/vbdev_error_rpc.o
00:02:59.862 CC module/bdev/aio/bdev_aio_rpc.o
00:02:59.862 CC module/bdev/split/vbdev_split.o
00:02:59.862 CC module/bdev/split/vbdev_split_rpc.o
00:02:59.862 CC module/bdev/iscsi/bdev_iscsi.o
00:02:59.862 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:59.862 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:59.862 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:59.862 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:59.862 LIB libspdk_blobfs_bdev.a
00:03:00.120 SO libspdk_blobfs_bdev.so.6.0
00:03:00.120 LIB libspdk_bdev_null.a
00:03:00.120 LIB libspdk_bdev_split.a
00:03:00.120 LIB libspdk_bdev_gpt.a
00:03:00.120 SO libspdk_bdev_null.so.6.0
00:03:00.120 LIB libspdk_bdev_ftl.a
00:03:00.120 SO libspdk_bdev_gpt.so.6.0
00:03:00.120 SYMLINK libspdk_blobfs_bdev.so
00:03:00.120 SO libspdk_bdev_split.so.6.0
00:03:00.120 SO libspdk_bdev_ftl.so.6.0
00:03:00.120 LIB libspdk_bdev_error.a
00:03:00.120 SYMLINK libspdk_bdev_null.so
00:03:00.120 LIB libspdk_bdev_passthru.a
00:03:00.120 SYMLINK libspdk_bdev_gpt.so
00:03:00.120 LIB libspdk_bdev_aio.a
00:03:00.120 SYMLINK libspdk_bdev_split.so
00:03:00.120 SYMLINK libspdk_bdev_ftl.so
00:03:00.120 SO libspdk_bdev_error.so.6.0
00:03:00.120 LIB libspdk_bdev_zone_block.a
00:03:00.120 LIB libspdk_bdev_iscsi.a
00:03:00.120 SO libspdk_bdev_passthru.so.6.0
00:03:00.120 SO libspdk_bdev_aio.so.6.0
00:03:00.120 SO libspdk_bdev_zone_block.so.6.0
00:03:00.120 LIB libspdk_bdev_malloc.a
00:03:00.120 LIB libspdk_bdev_delay.a
00:03:00.120 SO libspdk_bdev_iscsi.so.6.0
00:03:00.120 SO libspdk_bdev_malloc.so.6.0
00:03:00.120 SYMLINK libspdk_bdev_error.so
00:03:00.120 SO libspdk_bdev_delay.so.6.0
00:03:00.120 SYMLINK libspdk_bdev_aio.so
00:03:00.120 SYMLINK libspdk_bdev_passthru.so
00:03:00.120 SYMLINK libspdk_bdev_zone_block.so
00:03:00.397 SYMLINK libspdk_bdev_iscsi.so
00:03:00.397 SYMLINK libspdk_bdev_malloc.so
00:03:00.397 SYMLINK libspdk_bdev_delay.so
00:03:00.397 LIB libspdk_bdev_virtio.a
00:03:00.397 LIB libspdk_bdev_lvol.a
00:03:00.397 SO libspdk_bdev_virtio.so.6.0
00:03:00.397 SO libspdk_bdev_lvol.so.6.0
00:03:00.397 SYMLINK libspdk_bdev_virtio.so
00:03:00.397 SYMLINK libspdk_bdev_lvol.so
00:03:00.656 LIB libspdk_bdev_raid.a
00:03:00.656 SO libspdk_bdev_raid.so.6.0
00:03:00.656 SYMLINK libspdk_bdev_raid.so
00:03:01.591 LIB libspdk_bdev_nvme.a
00:03:01.591 SO libspdk_bdev_nvme.so.7.1
00:03:01.850 SYMLINK libspdk_bdev_nvme.so
00:03:02.419 CC module/event/subsystems/vmd/vmd.o
00:03:02.419 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:02.419 CC module/event/subsystems/iobuf/iobuf.o
00:03:02.419 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:02.419 CC module/event/subsystems/keyring/keyring.o
00:03:02.419 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:02.419 CC module/event/subsystems/sock/sock.o
00:03:02.419 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:02.419 CC module/event/subsystems/scheduler/scheduler.o
00:03:02.419 CC module/event/subsystems/fsdev/fsdev.o
00:03:02.679 LIB libspdk_event_vfu_tgt.a
00:03:02.679 LIB libspdk_event_keyring.a
00:03:02.679 LIB libspdk_event_sock.a
00:03:02.679 LIB libspdk_event_vmd.a
00:03:02.679 LIB libspdk_event_vhost_blk.a
00:03:02.679 LIB libspdk_event_scheduler.a
00:03:02.679 LIB libspdk_event_fsdev.a
00:03:02.679 LIB libspdk_event_iobuf.a
00:03:02.679 SO libspdk_event_sock.so.5.0
00:03:02.679 SO libspdk_event_vfu_tgt.so.3.0
00:03:02.679 SO libspdk_event_keyring.so.1.0
00:03:02.679 SO libspdk_event_scheduler.so.4.0
00:03:02.679 SO libspdk_event_vmd.so.6.0
00:03:02.679 SO libspdk_event_vhost_blk.so.3.0
00:03:02.679 SO libspdk_event_fsdev.so.1.0
00:03:02.679 SO libspdk_event_iobuf.so.3.0
00:03:02.679 SYMLINK libspdk_event_keyring.so
00:03:02.679 SYMLINK libspdk_event_vfu_tgt.so
00:03:02.679 SYMLINK libspdk_event_sock.so
00:03:02.679 SYMLINK libspdk_event_vhost_blk.so
00:03:02.679 SYMLINK libspdk_event_scheduler.so
00:03:02.679 SYMLINK libspdk_event_fsdev.so
00:03:02.679 SYMLINK libspdk_event_vmd.so
00:03:02.679 SYMLINK libspdk_event_iobuf.so
00:03:03.247 CC module/event/subsystems/accel/accel.o
00:03:03.247 LIB libspdk_event_accel.a
00:03:03.247 SO libspdk_event_accel.so.6.0
00:03:03.247 SYMLINK libspdk_event_accel.so
00:03:03.815 CC module/event/subsystems/bdev/bdev.o
00:03:03.815 LIB libspdk_event_bdev.a
00:03:03.815 SO libspdk_event_bdev.so.6.0
00:03:03.815 SYMLINK libspdk_event_bdev.so
00:03:04.384 CC module/event/subsystems/scsi/scsi.o
00:03:04.384 CC module/event/subsystems/ublk/ublk.o
00:03:04.384 CC module/event/subsystems/nbd/nbd.o
00:03:04.384 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:04.384 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:04.384 LIB libspdk_event_nbd.a
00:03:04.384 LIB libspdk_event_ublk.a
00:03:04.384 LIB libspdk_event_scsi.a
00:03:04.384 SO libspdk_event_nbd.so.6.0
00:03:04.384 SO libspdk_event_ublk.so.3.0
00:03:04.384 SO libspdk_event_scsi.so.6.0
00:03:04.384 LIB libspdk_event_nvmf.a
00:03:04.384 SO libspdk_event_nvmf.so.6.0
00:03:04.641 SYMLINK libspdk_event_nbd.so
00:03:04.641 SYMLINK libspdk_event_ublk.so
00:03:04.641 SYMLINK libspdk_event_scsi.so
00:03:04.641 SYMLINK libspdk_event_nvmf.so
00:03:04.900 CC module/event/subsystems/iscsi/iscsi.o
00:03:04.900 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:04.900 LIB libspdk_event_vhost_scsi.a
00:03:05.158 SO libspdk_event_vhost_scsi.so.3.0
00:03:05.158 LIB libspdk_event_iscsi.a
00:03:05.158 SO libspdk_event_iscsi.so.6.0
00:03:05.158 SYMLINK libspdk_event_vhost_scsi.so
00:03:05.158 SYMLINK libspdk_event_iscsi.so
00:03:05.417 SO libspdk.so.6.0
00:03:05.417 SYMLINK libspdk.so
00:03:05.676 CC app/trace_record/trace_record.o
00:03:05.676 CC app/spdk_nvme_discover/discovery_aer.o
00:03:05.676 CC app/spdk_lspci/spdk_lspci.o
00:03:05.676 CXX app/trace/trace.o
00:03:05.676 CC app/spdk_nvme_perf/perf.o
00:03:05.676 CC app/spdk_nvme_identify/identify.o
00:03:05.676 CC test/rpc_client/rpc_client_test.o
00:03:05.676 CC app/spdk_top/spdk_top.o
00:03:05.676 TEST_HEADER include/spdk/accel.h
00:03:05.676 TEST_HEADER include/spdk/accel_module.h
00:03:05.676 TEST_HEADER include/spdk/barrier.h
00:03:05.676 TEST_HEADER include/spdk/assert.h
00:03:05.676 TEST_HEADER include/spdk/base64.h
00:03:05.676 TEST_HEADER include/spdk/bdev.h
00:03:05.676 TEST_HEADER include/spdk/bdev_module.h
00:03:05.676 TEST_HEADER include/spdk/bdev_zone.h
00:03:05.676 TEST_HEADER include/spdk/bit_array.h
00:03:05.676 TEST_HEADER include/spdk/bit_pool.h
00:03:05.676 TEST_HEADER include/spdk/blob_bdev.h
00:03:05.676 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:05.676 TEST_HEADER include/spdk/blobfs.h
00:03:05.676 TEST_HEADER include/spdk/blob.h
00:03:05.676 TEST_HEADER include/spdk/conf.h
00:03:05.676 TEST_HEADER include/spdk/config.h
00:03:05.676 TEST_HEADER include/spdk/cpuset.h
00:03:05.676 TEST_HEADER include/spdk/crc16.h
00:03:05.676 TEST_HEADER include/spdk/crc64.h
00:03:05.676 TEST_HEADER include/spdk/crc32.h
00:03:05.676 TEST_HEADER include/spdk/dif.h
00:03:05.676 TEST_HEADER include/spdk/dma.h
00:03:05.676 TEST_HEADER include/spdk/endian.h
00:03:05.676 CC app/iscsi_tgt/iscsi_tgt.o
00:03:05.676 TEST_HEADER include/spdk/env.h
00:03:05.676 TEST_HEADER include/spdk/env_dpdk.h
00:03:05.676 TEST_HEADER include/spdk/event.h
00:03:05.676 TEST_HEADER include/spdk/fd.h
00:03:05.676 TEST_HEADER include/spdk/fd_group.h
00:03:05.676 TEST_HEADER include/spdk/fsdev.h
00:03:05.676 TEST_HEADER include/spdk/file.h
00:03:05.676 TEST_HEADER include/spdk/fsdev_module.h
00:03:05.676 TEST_HEADER include/spdk/ftl.h
00:03:05.676 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:05.676 CC app/spdk_dd/spdk_dd.o
00:03:05.676 TEST_HEADER include/spdk/gpt_spec.h
00:03:05.676 TEST_HEADER include/spdk/hexlify.h
00:03:05.676 TEST_HEADER include/spdk/idxd.h
00:03:05.676 TEST_HEADER include/spdk/idxd_spec.h
00:03:05.676 TEST_HEADER include/spdk/histogram_data.h
00:03:05.676 TEST_HEADER include/spdk/init.h
00:03:05.676 TEST_HEADER include/spdk/iscsi_spec.h
00:03:05.676 TEST_HEADER include/spdk/json.h
00:03:05.676 TEST_HEADER include/spdk/ioat_spec.h
00:03:05.676 TEST_HEADER include/spdk/ioat.h
00:03:05.676 TEST_HEADER include/spdk/keyring.h
00:03:05.676 TEST_HEADER include/spdk/jsonrpc.h
00:03:05.676 TEST_HEADER include/spdk/keyring_module.h
00:03:05.676 TEST_HEADER include/spdk/likely.h
00:03:05.676 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:05.676 TEST_HEADER include/spdk/log.h
00:03:05.676 TEST_HEADER include/spdk/lvol.h
00:03:05.676 TEST_HEADER include/spdk/md5.h
00:03:05.676 TEST_HEADER include/spdk/mmio.h
00:03:05.676 CC app/nvmf_tgt/nvmf_main.o
00:03:05.676 TEST_HEADER include/spdk/memory.h
00:03:05.676 TEST_HEADER include/spdk/nbd.h
00:03:05.676 TEST_HEADER include/spdk/net.h
00:03:05.676 TEST_HEADER include/spdk/nvme.h
00:03:05.676 TEST_HEADER include/spdk/notify.h
00:03:05.676 TEST_HEADER include/spdk/nvme_intel.h
00:03:05.676 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:05.676 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:05.676 TEST_HEADER include/spdk/nvme_spec.h
00:03:05.676 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:05.676 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:05.676 TEST_HEADER include/spdk/nvme_zns.h
00:03:05.676 TEST_HEADER include/spdk/nvmf.h
00:03:05.677 TEST_HEADER include/spdk/nvmf_transport.h
00:03:05.677 TEST_HEADER include/spdk/nvmf_spec.h
00:03:05.677 TEST_HEADER include/spdk/opal.h
00:03:05.677 TEST_HEADER include/spdk/opal_spec.h
00:03:05.677 TEST_HEADER include/spdk/pci_ids.h
00:03:05.677 TEST_HEADER include/spdk/pipe.h
00:03:05.677 TEST_HEADER include/spdk/reduce.h
00:03:05.677 TEST_HEADER include/spdk/queue.h
00:03:05.677 TEST_HEADER include/spdk/rpc.h
00:03:05.677 TEST_HEADER include/spdk/scheduler.h
00:03:05.677 TEST_HEADER include/spdk/scsi.h
00:03:05.677 TEST_HEADER include/spdk/scsi_spec.h
00:03:05.677 TEST_HEADER include/spdk/sock.h
00:03:05.677 TEST_HEADER include/spdk/stdinc.h 00:03:05.677 TEST_HEADER include/spdk/string.h 00:03:05.677 TEST_HEADER include/spdk/trace.h 00:03:05.677 TEST_HEADER include/spdk/thread.h 00:03:05.677 TEST_HEADER include/spdk/trace_parser.h 00:03:05.677 TEST_HEADER include/spdk/tree.h 00:03:05.677 TEST_HEADER include/spdk/util.h 00:03:05.677 TEST_HEADER include/spdk/ublk.h 00:03:05.677 TEST_HEADER include/spdk/uuid.h 00:03:05.677 TEST_HEADER include/spdk/version.h 00:03:05.677 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:05.677 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:05.677 TEST_HEADER include/spdk/vhost.h 00:03:05.677 TEST_HEADER include/spdk/vmd.h 00:03:05.677 TEST_HEADER include/spdk/zipf.h 00:03:05.677 CC app/spdk_tgt/spdk_tgt.o 00:03:05.677 TEST_HEADER include/spdk/xor.h 00:03:05.677 CXX test/cpp_headers/accel_module.o 00:03:05.677 CXX test/cpp_headers/assert.o 00:03:05.677 CXX test/cpp_headers/accel.o 00:03:05.677 CXX test/cpp_headers/barrier.o 00:03:05.677 CXX test/cpp_headers/base64.o 00:03:05.677 CXX test/cpp_headers/bdev_module.o 00:03:05.677 CXX test/cpp_headers/bdev.o 00:03:05.677 CXX test/cpp_headers/bdev_zone.o 00:03:05.677 CXX test/cpp_headers/bit_pool.o 00:03:05.677 CXX test/cpp_headers/bit_array.o 00:03:05.677 CXX test/cpp_headers/blob_bdev.o 00:03:05.677 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.677 CXX test/cpp_headers/blobfs.o 00:03:05.677 CXX test/cpp_headers/conf.o 00:03:05.677 CXX test/cpp_headers/blob.o 00:03:05.677 CXX test/cpp_headers/config.o 00:03:05.946 CXX test/cpp_headers/cpuset.o 00:03:05.946 CXX test/cpp_headers/crc32.o 00:03:05.946 CXX test/cpp_headers/crc16.o 00:03:05.946 CXX test/cpp_headers/crc64.o 00:03:05.946 CXX test/cpp_headers/dif.o 00:03:05.946 CXX test/cpp_headers/dma.o 00:03:05.946 CXX test/cpp_headers/endian.o 00:03:05.946 CXX test/cpp_headers/env_dpdk.o 00:03:05.946 CXX test/cpp_headers/env.o 00:03:05.946 CXX test/cpp_headers/event.o 00:03:05.946 CXX test/cpp_headers/fd_group.o 00:03:05.946 CXX 
test/cpp_headers/fd.o 00:03:05.946 CXX test/cpp_headers/fsdev.o 00:03:05.946 CXX test/cpp_headers/file.o 00:03:05.946 CXX test/cpp_headers/ftl.o 00:03:05.946 CXX test/cpp_headers/fsdev_module.o 00:03:05.946 CXX test/cpp_headers/gpt_spec.o 00:03:05.946 CXX test/cpp_headers/hexlify.o 00:03:05.946 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.946 CXX test/cpp_headers/idxd.o 00:03:05.946 CXX test/cpp_headers/idxd_spec.o 00:03:05.946 CXX test/cpp_headers/histogram_data.o 00:03:05.946 CXX test/cpp_headers/init.o 00:03:05.946 CXX test/cpp_headers/ioat.o 00:03:05.946 CXX test/cpp_headers/iscsi_spec.o 00:03:05.946 CXX test/cpp_headers/ioat_spec.o 00:03:05.946 CXX test/cpp_headers/json.o 00:03:05.946 CXX test/cpp_headers/jsonrpc.o 00:03:05.946 CXX test/cpp_headers/keyring.o 00:03:05.946 CXX test/cpp_headers/likely.o 00:03:05.946 CXX test/cpp_headers/keyring_module.o 00:03:05.946 CXX test/cpp_headers/log.o 00:03:05.946 CXX test/cpp_headers/lvol.o 00:03:05.946 CXX test/cpp_headers/memory.o 00:03:05.946 CXX test/cpp_headers/md5.o 00:03:05.947 CXX test/cpp_headers/mmio.o 00:03:05.947 CXX test/cpp_headers/nbd.o 00:03:05.947 CXX test/cpp_headers/net.o 00:03:05.947 CXX test/cpp_headers/nvme.o 00:03:05.947 CXX test/cpp_headers/notify.o 00:03:05.947 CXX test/cpp_headers/nvme_intel.o 00:03:05.947 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.947 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.947 CXX test/cpp_headers/nvme_spec.o 00:03:05.947 CXX test/cpp_headers/nvme_zns.o 00:03:05.947 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.947 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.947 CXX test/cpp_headers/nvmf.o 00:03:05.947 CXX test/cpp_headers/nvmf_transport.o 00:03:05.947 CXX test/cpp_headers/nvmf_spec.o 00:03:05.947 CXX test/cpp_headers/opal.o 00:03:05.947 CC examples/util/zipf/zipf.o 00:03:05.947 CC test/env/memory/memory_ut.o 00:03:05.947 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.947 CC examples/ioat/verify/verify.o 00:03:05.947 CC 
test/app/histogram_perf/histogram_perf.o 00:03:05.947 CC test/env/vtophys/vtophys.o 00:03:05.947 CXX test/cpp_headers/opal_spec.o 00:03:05.947 CC test/env/pci/pci_ut.o 00:03:05.947 CC test/app/jsoncat/jsoncat.o 00:03:05.947 CC test/app/stub/stub.o 00:03:05.947 CC test/dma/test_dma/test_dma.o 00:03:05.947 CC test/thread/poller_perf/poller_perf.o 00:03:05.947 CC examples/ioat/perf/perf.o 00:03:05.947 CC app/fio/bdev/fio_plugin.o 00:03:05.947 CC app/fio/nvme/fio_plugin.o 00:03:06.218 CC test/app/bdev_svc/bdev_svc.o 00:03:06.218 LINK spdk_lspci 00:03:06.218 LINK spdk_nvme_discover 00:03:06.218 LINK rpc_client_test 00:03:06.218 LINK iscsi_tgt 00:03:06.218 LINK nvmf_tgt 00:03:06.218 LINK interrupt_tgt 00:03:06.478 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:06.478 LINK vtophys 00:03:06.478 CXX test/cpp_headers/pci_ids.o 00:03:06.478 LINK histogram_perf 00:03:06.478 CXX test/cpp_headers/pipe.o 00:03:06.478 CXX test/cpp_headers/queue.o 00:03:06.478 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.478 CXX test/cpp_headers/reduce.o 00:03:06.478 CXX test/cpp_headers/scheduler.o 00:03:06.478 CXX test/cpp_headers/rpc.o 00:03:06.478 CXX test/cpp_headers/scsi_spec.o 00:03:06.478 CXX test/cpp_headers/scsi.o 00:03:06.478 LINK env_dpdk_post_init 00:03:06.478 CXX test/cpp_headers/stdinc.o 00:03:06.478 CXX test/cpp_headers/sock.o 00:03:06.478 CXX test/cpp_headers/thread.o 00:03:06.478 CXX test/cpp_headers/string.o 00:03:06.478 CXX test/cpp_headers/trace.o 00:03:06.478 CXX test/cpp_headers/trace_parser.o 00:03:06.478 CXX test/cpp_headers/tree.o 00:03:06.478 LINK poller_perf 00:03:06.478 CXX test/cpp_headers/ublk.o 00:03:06.478 CXX test/cpp_headers/util.o 00:03:06.478 CXX test/cpp_headers/uuid.o 00:03:06.478 CXX test/cpp_headers/version.o 00:03:06.478 LINK spdk_trace_record 00:03:06.478 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.478 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.478 LINK stub 00:03:06.478 CXX test/cpp_headers/vhost.o 00:03:06.478 CXX test/cpp_headers/vmd.o 
00:03:06.478 CXX test/cpp_headers/xor.o 00:03:06.478 LINK verify 00:03:06.478 LINK spdk_dd 00:03:06.478 CXX test/cpp_headers/zipf.o 00:03:06.478 LINK bdev_svc 00:03:06.478 LINK jsoncat 00:03:06.478 LINK zipf 00:03:06.737 LINK spdk_tgt 00:03:06.737 LINK ioat_perf 00:03:06.737 LINK spdk_trace 00:03:06.737 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:06.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:06.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.995 LINK pci_ut 00:03:06.995 LINK test_dma 00:03:06.995 LINK spdk_top 00:03:06.995 LINK spdk_bdev 00:03:06.995 LINK spdk_nvme 00:03:06.995 CC app/vhost/vhost.o 00:03:06.995 LINK nvme_fuzz 00:03:06.995 CC test/event/reactor/reactor.o 00:03:06.995 LINK spdk_nvme_identify 00:03:06.995 CC test/event/event_perf/event_perf.o 00:03:06.995 CC examples/idxd/perf/perf.o 00:03:06.995 CC test/event/reactor_perf/reactor_perf.o 00:03:06.995 CC test/event/app_repeat/app_repeat.o 00:03:07.253 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.253 LINK vhost_fuzz 00:03:07.253 CC test/event/scheduler/scheduler.o 00:03:07.253 CC examples/vmd/led/led.o 00:03:07.253 CC examples/sock/hello_world/hello_sock.o 00:03:07.253 LINK spdk_nvme_perf 00:03:07.253 CC examples/thread/thread/thread_ex.o 00:03:07.253 LINK reactor 00:03:07.253 LINK reactor_perf 00:03:07.253 LINK event_perf 00:03:07.253 LINK mem_callbacks 00:03:07.253 LINK vhost 00:03:07.253 LINK lsvmd 00:03:07.253 LINK app_repeat 00:03:07.253 LINK led 00:03:07.512 LINK scheduler 00:03:07.512 LINK hello_sock 00:03:07.512 LINK idxd_perf 00:03:07.512 LINK thread 00:03:07.512 CC test/nvme/sgl/sgl.o 00:03:07.512 CC test/nvme/aer/aer.o 00:03:07.512 CC test/nvme/err_injection/err_injection.o 00:03:07.512 CC test/nvme/boot_partition/boot_partition.o 00:03:07.512 CC test/nvme/simple_copy/simple_copy.o 00:03:07.512 CC test/nvme/reset/reset.o 00:03:07.512 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.512 CC test/nvme/compliance/nvme_compliance.o 00:03:07.512 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.512 CC test/nvme/reserve/reserve.o 00:03:07.512 CC test/nvme/fdp/fdp.o 00:03:07.512 CC test/nvme/overhead/overhead.o 00:03:07.512 CC test/nvme/startup/startup.o 00:03:07.512 CC test/nvme/cuse/cuse.o 00:03:07.512 CC test/nvme/connect_stress/connect_stress.o 00:03:07.512 CC test/nvme/e2edp/nvme_dp.o 00:03:07.512 CC test/accel/dif/dif.o 00:03:07.512 LINK memory_ut 00:03:07.512 CC test/blobfs/mkfs/mkfs.o 00:03:07.771 CC test/lvol/esnap/esnap.o 00:03:07.771 LINK boot_partition 00:03:07.771 LINK err_injection 00:03:07.771 LINK startup 00:03:07.771 LINK doorbell_aers 00:03:07.771 LINK connect_stress 00:03:07.771 LINK reserve 00:03:07.771 LINK fused_ordering 00:03:07.771 LINK sgl 00:03:07.771 LINK simple_copy 00:03:07.771 LINK reset 00:03:07.771 LINK aer 00:03:07.771 LINK mkfs 00:03:07.771 LINK nvme_dp 00:03:07.771 LINK overhead 00:03:07.771 LINK fdp 00:03:07.771 CC examples/nvme/reconnect/reconnect.o 00:03:07.771 LINK nvme_compliance 00:03:07.771 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.771 CC examples/nvme/arbitration/arbitration.o 00:03:07.771 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.771 CC examples/nvme/abort/abort.o 00:03:07.771 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.771 CC examples/nvme/hello_world/hello_world.o 00:03:07.771 CC examples/nvme/hotplug/hotplug.o 00:03:08.029 CC examples/accel/perf/accel_perf.o 00:03:08.029 CC examples/blob/cli/blobcli.o 00:03:08.029 CC examples/blob/hello_world/hello_blob.o 00:03:08.029 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:08.029 LINK cmb_copy 00:03:08.029 LINK pmr_persistence 00:03:08.029 LINK hotplug 00:03:08.029 LINK hello_world 00:03:08.288 LINK dif 00:03:08.288 LINK arbitration 00:03:08.288 LINK iscsi_fuzz 00:03:08.288 LINK reconnect 00:03:08.288 LINK abort 00:03:08.288 LINK hello_blob 00:03:08.288 LINK hello_fsdev 00:03:08.288 LINK nvme_manage 00:03:08.288 LINK accel_perf 00:03:08.546 LINK blobcli 00:03:08.546 LINK cuse 00:03:08.805 
CC test/bdev/bdevio/bdevio.o 00:03:08.805 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.805 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.064 LINK bdevio 00:03:09.064 LINK hello_bdev 00:03:09.632 LINK bdevperf 00:03:10.200 CC examples/nvmf/nvmf/nvmf.o 00:03:10.200 LINK nvmf 00:03:11.579 LINK esnap 00:03:11.579 00:03:11.579 real 0m56.960s 00:03:11.579 user 8m24.369s 00:03:11.579 sys 3m56.854s 00:03:11.579 15:36:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.579 15:36:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.579 ************************************ 00:03:11.579 END TEST make 00:03:11.579 ************************************ 00:03:11.579 15:36:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.579 15:36:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.579 15:36:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.579 15:36:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.579 15:36:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.579 15:36:06 -- pm/common@44 -- $ pid=1722304 00:03:11.579 15:36:06 -- pm/common@50 -- $ kill -TERM 1722304 00:03:11.579 15:36:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.579 15:36:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.579 15:36:06 -- pm/common@44 -- $ pid=1722305 00:03:11.579 15:36:06 -- pm/common@50 -- $ kill -TERM 1722305 00:03:11.579 15:36:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.579 15:36:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.579 15:36:06 -- pm/common@44 -- $ pid=1722307 00:03:11.579 15:36:06 -- pm/common@50 -- $ kill -TERM 1722307 00:03:11.579 15:36:06 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:11.579 15:36:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.579 15:36:06 -- pm/common@44 -- $ pid=1722332 00:03:11.579 15:36:06 -- pm/common@50 -- $ sudo -E kill -TERM 1722332 00:03:11.579 15:36:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.579 15:36:06 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:11.839 15:36:06 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:11.839 15:36:06 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:11.839 15:36:06 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:11.839 15:36:06 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:11.839 15:36:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.839 15:36:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.839 15:36:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.839 15:36:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.839 15:36:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.839 15:36:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.839 15:36:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.839 15:36:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.839 15:36:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.839 15:36:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.839 15:36:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.839 15:36:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.839 15:36:06 -- scripts/common.sh@345 -- # : 1 00:03:11.839 15:36:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.839 15:36:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.839 15:36:06 -- scripts/common.sh@365 -- # decimal 1 00:03:11.839 15:36:06 -- scripts/common.sh@353 -- # local d=1 00:03:11.839 15:36:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.839 15:36:06 -- scripts/common.sh@355 -- # echo 1 00:03:11.839 15:36:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.839 15:36:06 -- scripts/common.sh@366 -- # decimal 2 00:03:11.839 15:36:06 -- scripts/common.sh@353 -- # local d=2 00:03:11.839 15:36:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.839 15:36:06 -- scripts/common.sh@355 -- # echo 2 00:03:11.839 15:36:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.839 15:36:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.839 15:36:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.839 15:36:06 -- scripts/common.sh@368 -- # return 0 00:03:11.839 15:36:06 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.839 15:36:06 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.839 --rc genhtml_branch_coverage=1 00:03:11.839 --rc genhtml_function_coverage=1 00:03:11.839 --rc genhtml_legend=1 00:03:11.839 --rc geninfo_all_blocks=1 00:03:11.839 --rc geninfo_unexecuted_blocks=1 00:03:11.839 00:03:11.839 ' 00:03:11.839 15:36:06 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.839 --rc genhtml_branch_coverage=1 00:03:11.839 --rc genhtml_function_coverage=1 00:03:11.839 --rc genhtml_legend=1 00:03:11.839 --rc geninfo_all_blocks=1 00:03:11.839 --rc geninfo_unexecuted_blocks=1 00:03:11.839 00:03:11.839 ' 00:03:11.839 15:36:06 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.839 --rc genhtml_branch_coverage=1 00:03:11.839 --rc 
genhtml_function_coverage=1 00:03:11.839 --rc genhtml_legend=1 00:03:11.839 --rc geninfo_all_blocks=1 00:03:11.839 --rc geninfo_unexecuted_blocks=1 00:03:11.839 00:03:11.839 ' 00:03:11.839 15:36:06 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.839 --rc genhtml_branch_coverage=1 00:03:11.839 --rc genhtml_function_coverage=1 00:03:11.839 --rc genhtml_legend=1 00:03:11.839 --rc geninfo_all_blocks=1 00:03:11.839 --rc geninfo_unexecuted_blocks=1 00:03:11.839 00:03:11.839 ' 00:03:11.839 15:36:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.840 15:36:06 -- nvmf/common.sh@7 -- # uname -s 00:03:11.840 15:36:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.840 15:36:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.840 15:36:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.840 15:36:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.840 15:36:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.840 15:36:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.840 15:36:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.840 15:36:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.840 15:36:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.840 15:36:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.840 15:36:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:03:11.840 15:36:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:03:11.840 15:36:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.840 15:36:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.840 15:36:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.840 15:36:06 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.840 15:36:06 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.840 15:36:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.840 15:36:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.840 15:36:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.840 15:36:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.840 15:36:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.840 15:36:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.840 15:36:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.840 15:36:06 -- paths/export.sh@5 -- # export PATH 00:03:11.840 15:36:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.840 15:36:06 -- nvmf/common.sh@51 -- # : 0 00:03:11.840 15:36:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.840 15:36:06 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:11.840 15:36:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.840 15:36:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.840 15:36:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.840 15:36:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.840 15:36:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.840 15:36:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.840 15:36:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.840 15:36:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.840 15:36:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.840 15:36:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.840 15:36:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.840 15:36:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.840 15:36:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.840 15:36:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.840 15:36:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.840 15:36:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.840 15:36:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.840 15:36:06 -- spdk/autotest.sh@48 -- # udevadm_pid=1786357 00:03:11.840 15:36:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.840 15:36:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.840 15:36:06 -- pm/common@17 -- # local monitor 00:03:11.840 15:36:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.840 15:36:06 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:11.840 15:36:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.840 15:36:06 -- pm/common@21 -- # date +%s 00:03:11.840 15:36:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.840 15:36:06 -- pm/common@21 -- # date +%s 00:03:11.840 15:36:06 -- pm/common@25 -- # sleep 1 00:03:11.840 15:36:06 -- pm/common@21 -- # date +%s 00:03:11.840 15:36:06 -- pm/common@21 -- # date +%s 00:03:11.840 15:36:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733754966 00:03:11.840 15:36:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733754966 00:03:11.840 15:36:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733754966 00:03:11.840 15:36:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733754966 00:03:11.840 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733754966_collect-cpu-load.pm.log 00:03:11.840 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733754966_collect-vmstat.pm.log 00:03:11.840 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733754966_collect-cpu-temp.pm.log 00:03:11.840 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733754966_collect-bmc-pm.bmc.pm.log 00:03:12.779 
15:36:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.779 15:36:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.779 15:36:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.779 15:36:07 -- common/autotest_common.sh@10 -- # set +x 00:03:12.779 15:36:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.779 15:36:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.779 15:36:07 -- common/autotest_common.sh@10 -- # set +x 00:03:13.038 15:36:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:13.038 15:36:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.038 15:36:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.038 15:36:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:13.038 15:36:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.038 15:36:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:13.038 15:36:08 -- common/autotest_common.sh@1457 -- # uname 00:03:13.038 15:36:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:13.038 15:36:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:13.038 15:36:08 -- common/autotest_common.sh@1477 -- # uname 00:03:13.038 15:36:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:13.038 15:36:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:13.038 15:36:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:13.038 lcov: LCOV version 1.15 00:03:13.038 15:36:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:25.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.248 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.243 15:36:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:40.243 15:36:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.243 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:03:40.243 15:36:32 -- spdk/autotest.sh@78 -- # rm -f 00:03:40.243 15:36:32 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.836 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:03:40.836 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:40.836 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:40.836 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:41.094 
0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:41.094 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:41.094 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:41.094 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:41.094 15:36:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:41.094 15:36:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:41.094 15:36:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:41.094 15:36:36 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:41.094 15:36:36 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:41.094 15:36:36 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:41.094 15:36:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:41.094 15:36:36 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:41.094 15:36:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:41.094 15:36:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:41.094 15:36:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:41.094 15:36:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.094 15:36:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:41.094 15:36:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:41.094 15:36:36 -- common/autotest_common.sh@1669 -- # bdf=0000:5f:00.0 00:03:41.094 15:36:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:41.094 15:36:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:41.094 15:36:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:41.094 15:36:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.094 15:36:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:41.094 15:36:36 -- 
common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:41.094 15:36:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:41.094 15:36:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:41.094 15:36:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:41.094 15:36:36 -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:03:41.094 15:36:36 -- common/autotest_common.sh@1672 -- # zoned_ctrls["$nvme"]=0000:5f:00.0 00:03:41.095 15:36:36 -- common/autotest_common.sh@1673 -- # continue 2 00:03:41.095 15:36:36 -- common/autotest_common.sh@1678 -- # for nvme in "${!zoned_ctrls[@]}" 00:03:41.095 15:36:36 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:03:41.095 15:36:36 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:03:41.095 15:36:36 -- common/autotest_common.sh@1679 -- # for ns in "$nvme/"nvme*n* 00:03:41.095 15:36:36 -- common/autotest_common.sh@1680 -- # zoned_devs["${ns##*/}"]=0000:5f:00.0 00:03:41.095 15:36:36 -- spdk/autotest.sh@85 -- # (( 2 > 0 )) 00:03:41.095 15:36:36 -- spdk/autotest.sh@90 -- # export 'PCI_BLOCKED=0000:5f:00.0 0000:5f:00.0' 00:03:41.095 15:36:36 -- spdk/autotest.sh@90 -- # PCI_BLOCKED='0000:5f:00.0 0000:5f:00.0' 00:03:41.095 15:36:36 -- spdk/autotest.sh@91 -- # export 'PCI_ZONED=0000:5f:00.0 0000:5f:00.0' 00:03:41.095 15:36:36 -- spdk/autotest.sh@91 -- # PCI_ZONED='0000:5f:00.0 0000:5f:00.0' 00:03:41.095 15:36:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.095 15:36:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:41.095 15:36:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:41.095 15:36:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:41.095 15:36:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.095 No valid GPT data, bailing 00:03:41.095 15:36:36 -- scripts/common.sh@394 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.095 15:36:36 -- scripts/common.sh@394 -- # pt= 00:03:41.095 15:36:36 -- scripts/common.sh@395 -- # return 1 00:03:41.095 15:36:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.095 1+0 records in 00:03:41.095 1+0 records out 00:03:41.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00155289 s, 675 MB/s 00:03:41.095 15:36:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.095 15:36:36 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:03:41.095 15:36:36 -- spdk/autotest.sh@99 -- # continue 00:03:41.095 15:36:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.095 15:36:36 -- spdk/autotest.sh@99 -- # [[ -z 0000:5f:00.0 ]] 00:03:41.095 15:36:36 -- spdk/autotest.sh@99 -- # continue 00:03:41.095 15:36:36 -- spdk/autotest.sh@105 -- # sync 00:03:41.095 15:36:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.095 15:36:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.095 15:36:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:47.666 15:36:41 -- spdk/autotest.sh@111 -- # uname -s 00:03:47.666 15:36:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:47.666 15:36:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:47.666 15:36:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:49.574 Hugepages 00:03:49.574 node hugesize free / total 00:03:49.574 node0 1048576kB 0 / 0 00:03:49.574 node0 2048kB 0 / 0 00:03:49.574 node1 1048576kB 0 / 0 00:03:49.574 node1 2048kB 0 / 0 00:03:49.574 00:03:49.574 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.574 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.4 
8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:49.574 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:49.574 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:49.833 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2 00:03:49.833 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:49.833 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:49.833 15:36:44 -- spdk/autotest.sh@117 -- # uname -s 00:03:49.833 15:36:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:49.833 15:36:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:49.833 15:36:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.367 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:52.936 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:03:52.936 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.936 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.873 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.873 15:36:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:54.811 15:36:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:54.811 15:36:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:54.811 15:36:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:54.811 15:36:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:54.811 15:36:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:54.811 15:36:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:54.811 15:36:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.811 15:36:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.811 15:36:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:55.070 15:36:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:55.070 15:36:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:55.070 15:36:50 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.608 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:57.867 Waiting for block devices as requested 00:03:57.867 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.126 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:58.126 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:58.126 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:58.385 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:58.385 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:58.385 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:58.644 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:58.644 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:03:58.644 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:58.904 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:58.904 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:58.904 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:58.904 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:59.164 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:59.164 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:59.164 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:59.423 15:36:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.423 15:36:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:59.423 15:36:54 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:59.423 15:36:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:59.423 15:36:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:59.423 15:36:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.423 15:36:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.423 15:36:54 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:59.423 15:36:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.423 15:36:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.423 
15:36:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:59.423 15:36:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.423 15:36:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.423 15:36:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.423 15:36:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.423 15:36:54 -- common/autotest_common.sh@1543 -- # continue 00:03:59.423 15:36:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.423 15:36:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:59.423 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:03:59.423 15:36:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.423 15:36:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.423 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:03:59.423 15:36:54 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.961 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:02.531 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.531 0000:80:04.0 (8086 2021): 
ioatdma -> vfio-pci 00:04:03.469 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.469 15:36:58 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.469 15:36:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.469 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.469 15:36:58 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.469 15:36:58 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.469 15:36:58 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.469 15:36:58 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.469 15:36:58 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.469 15:36:58 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.469 15:36:58 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.469 15:36:58 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.469 15:36:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.469 15:36:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.469 15:36:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.469 15:36:58 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:03.469 15:36:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.728 15:36:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:03.728 15:36:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:03.728 15:36:58 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.728 15:36:58 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:03.728 15:36:58 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:03.728 15:36:58 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:03.728 15:36:58 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 
00:04:03.728 15:36:58 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:03.728 15:36:58 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:03.728 15:36:58 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:03.728 15:36:58 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1801140 00:04:03.728 15:36:58 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.728 15:36:58 -- common/autotest_common.sh@1585 -- # waitforlisten 1801140 00:04:03.728 15:36:58 -- common/autotest_common.sh@835 -- # '[' -z 1801140 ']' 00:04:03.728 15:36:58 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.728 15:36:58 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.728 15:36:58 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.728 15:36:58 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.728 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:04:03.728 [2024-12-09 15:36:58.763531] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:03.728 [2024-12-09 15:36:58.763575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801140 ] 00:04:03.729 [2024-12-09 15:36:58.835939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.729 [2024-12-09 15:36:58.875170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.988 15:36:59 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.988 15:36:59 -- common/autotest_common.sh@868 -- # return 0 00:04:03.988 15:36:59 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:03.988 15:36:59 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:03.988 15:36:59 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:07.276 nvme0n1 00:04:07.276 15:37:02 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:07.276 [2024-12-09 15:37:02.288554] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 1 00:04:07.276 [2024-12-09 15:37:02.288583] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 1 00:04:07.276 request: 00:04:07.276 { 00:04:07.276 "nvme_ctrlr_name": "nvme0", 00:04:07.276 "password": "test", 00:04:07.276 "method": "bdev_nvme_opal_revert", 00:04:07.276 "req_id": 1 00:04:07.276 } 00:04:07.276 Got JSON-RPC error response 00:04:07.276 response: 00:04:07.276 { 00:04:07.276 "code": -32603, 00:04:07.276 "message": "Internal error" 00:04:07.276 } 00:04:07.276 15:37:02 -- common/autotest_common.sh@1591 -- # true 00:04:07.276 15:37:02 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:07.276 15:37:02 -- 
common/autotest_common.sh@1595 -- # killprocess 1801140 00:04:07.276 15:37:02 -- common/autotest_common.sh@954 -- # '[' -z 1801140 ']' 00:04:07.276 15:37:02 -- common/autotest_common.sh@958 -- # kill -0 1801140 00:04:07.276 15:37:02 -- common/autotest_common.sh@959 -- # uname 00:04:07.276 15:37:02 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.276 15:37:02 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801140 00:04:07.276 15:37:02 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.276 15:37:02 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.276 15:37:02 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801140' 00:04:07.276 killing process with pid 1801140 00:04:07.276 15:37:02 -- common/autotest_common.sh@973 -- # kill 1801140 00:04:07.276 15:37:02 -- common/autotest_common.sh@978 -- # wait 1801140 00:04:09.179 15:37:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:09.179 15:37:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:09.179 15:37:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.179 15:37:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:09.179 15:37:03 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:09.179 15:37:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.179 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.179 15:37:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:09.179 15:37:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.179 15:37:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.179 15:37:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.179 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.179 ************************************ 00:04:09.179 START TEST env 00:04:09.179 ************************************ 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:09.179 * Looking for test storage... 00:04:09.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.179 15:37:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.179 15:37:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.179 15:37:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.179 15:37:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.179 15:37:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.179 15:37:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.179 15:37:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.179 15:37:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.179 15:37:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.179 15:37:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.179 15:37:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.179 15:37:04 env -- scripts/common.sh@344 -- # case "$op" in 00:04:09.179 15:37:04 env -- scripts/common.sh@345 -- # : 1 00:04:09.179 15:37:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.179 15:37:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.179 15:37:04 env -- scripts/common.sh@365 -- # decimal 1 00:04:09.179 15:37:04 env -- scripts/common.sh@353 -- # local d=1 00:04:09.179 15:37:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.179 15:37:04 env -- scripts/common.sh@355 -- # echo 1 00:04:09.179 15:37:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.179 15:37:04 env -- scripts/common.sh@366 -- # decimal 2 00:04:09.179 15:37:04 env -- scripts/common.sh@353 -- # local d=2 00:04:09.179 15:37:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.179 15:37:04 env -- scripts/common.sh@355 -- # echo 2 00:04:09.179 15:37:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.179 15:37:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.179 15:37:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.179 15:37:04 env -- scripts/common.sh@368 -- # return 0 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.179 --rc genhtml_branch_coverage=1 00:04:09.179 --rc genhtml_function_coverage=1 00:04:09.179 --rc genhtml_legend=1 00:04:09.179 --rc geninfo_all_blocks=1 00:04:09.179 --rc geninfo_unexecuted_blocks=1 00:04:09.179 00:04:09.179 ' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.179 --rc genhtml_branch_coverage=1 00:04:09.179 --rc genhtml_function_coverage=1 00:04:09.179 --rc genhtml_legend=1 00:04:09.179 --rc geninfo_all_blocks=1 00:04:09.179 --rc geninfo_unexecuted_blocks=1 00:04:09.179 00:04:09.179 ' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:09.179 --rc genhtml_branch_coverage=1 00:04:09.179 --rc genhtml_function_coverage=1 00:04:09.179 --rc genhtml_legend=1 00:04:09.179 --rc geninfo_all_blocks=1 00:04:09.179 --rc geninfo_unexecuted_blocks=1 00:04:09.179 00:04:09.179 ' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.179 --rc genhtml_branch_coverage=1 00:04:09.179 --rc genhtml_function_coverage=1 00:04:09.179 --rc genhtml_legend=1 00:04:09.179 --rc geninfo_all_blocks=1 00:04:09.179 --rc geninfo_unexecuted_blocks=1 00:04:09.179 00:04:09.179 ' 00:04:09.179 15:37:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.179 15:37:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.179 15:37:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.179 ************************************ 00:04:09.179 START TEST env_memory 00:04:09.179 ************************************ 00:04:09.179 15:37:04 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:09.179 00:04:09.179 00:04:09.179 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.179 http://cunit.sourceforge.net/ 00:04:09.179 00:04:09.179 00:04:09.179 Suite: memory 00:04:09.179 Test: alloc and free memory map ...[2024-12-09 15:37:04.256336] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:09.179 passed 00:04:09.179 Test: mem map translation ...[2024-12-09 15:37:04.274417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:09.179 [2024-12-09 
15:37:04.274432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:09.179 [2024-12-09 15:37:04.274465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:09.179 [2024-12-09 15:37:04.274472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:09.179 passed 00:04:09.179 Test: mem map registration ...[2024-12-09 15:37:04.311178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:09.179 [2024-12-09 15:37:04.311193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:09.179 passed 00:04:09.179 Test: mem map adjacent registrations ...passed 00:04:09.179 00:04:09.179 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.179 suites 1 1 n/a 0 0 00:04:09.179 tests 4 4 4 0 0 00:04:09.179 asserts 152 152 152 0 n/a 00:04:09.179 00:04:09.180 Elapsed time = 0.131 seconds 00:04:09.180 00:04:09.180 real 0m0.141s 00:04:09.180 user 0m0.133s 00:04:09.180 sys 0m0.007s 00:04:09.180 15:37:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.180 15:37:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:09.180 ************************************ 00:04:09.180 END TEST env_memory 00:04:09.180 ************************************ 00:04:09.180 15:37:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.180 15:37:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:09.180 15:37:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.180 15:37:04 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.439 ************************************ 00:04:09.439 START TEST env_vtophys 00:04:09.439 ************************************ 00:04:09.439 15:37:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:09.439 EAL: lib.eal log level changed from notice to debug 00:04:09.439 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.439 EAL: Detected lcore 1 as core 1 on socket 0 00:04:09.439 EAL: Detected lcore 2 as core 2 on socket 0 00:04:09.439 EAL: Detected lcore 3 as core 3 on socket 0 00:04:09.439 EAL: Detected lcore 4 as core 4 on socket 0 00:04:09.439 EAL: Detected lcore 5 as core 5 on socket 0 00:04:09.439 EAL: Detected lcore 6 as core 6 on socket 0 00:04:09.439 EAL: Detected lcore 7 as core 8 on socket 0 00:04:09.439 EAL: Detected lcore 8 as core 9 on socket 0 00:04:09.439 EAL: Detected lcore 9 as core 10 on socket 0 00:04:09.439 EAL: Detected lcore 10 as core 11 on socket 0 00:04:09.439 EAL: Detected lcore 11 as core 12 on socket 0 00:04:09.439 EAL: Detected lcore 12 as core 13 on socket 0 00:04:09.439 EAL: Detected lcore 13 as core 16 on socket 0 00:04:09.439 EAL: Detected lcore 14 as core 17 on socket 0 00:04:09.439 EAL: Detected lcore 15 as core 18 on socket 0 00:04:09.439 EAL: Detected lcore 16 as core 19 on socket 0 00:04:09.439 EAL: Detected lcore 17 as core 20 on socket 0 00:04:09.439 EAL: Detected lcore 18 as core 21 on socket 0 00:04:09.439 EAL: Detected lcore 19 as core 25 on socket 0 00:04:09.439 EAL: Detected lcore 20 as core 26 on socket 0 00:04:09.439 EAL: Detected lcore 21 as core 27 on socket 0 00:04:09.439 EAL: Detected lcore 22 as core 28 on socket 0 00:04:09.439 EAL: Detected lcore 23 as core 29 on socket 0 00:04:09.439 EAL: Detected lcore 24 as core 0 on socket 1 00:04:09.440 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:09.440 EAL: Detected lcore 26 as core 2 on socket 1 00:04:09.440 EAL: Detected lcore 27 as core 3 on socket 1 00:04:09.440 EAL: Detected lcore 28 as core 4 on socket 1 00:04:09.440 EAL: Detected lcore 29 as core 5 on socket 1 00:04:09.440 EAL: Detected lcore 30 as core 6 on socket 1 00:04:09.440 EAL: Detected lcore 31 as core 8 on socket 1 00:04:09.440 EAL: Detected lcore 32 as core 9 on socket 1 00:04:09.440 EAL: Detected lcore 33 as core 10 on socket 1 00:04:09.440 EAL: Detected lcore 34 as core 11 on socket 1 00:04:09.440 EAL: Detected lcore 35 as core 12 on socket 1 00:04:09.440 EAL: Detected lcore 36 as core 13 on socket 1 00:04:09.440 EAL: Detected lcore 37 as core 16 on socket 1 00:04:09.440 EAL: Detected lcore 38 as core 17 on socket 1 00:04:09.440 EAL: Detected lcore 39 as core 18 on socket 1 00:04:09.440 EAL: Detected lcore 40 as core 19 on socket 1 00:04:09.440 EAL: Detected lcore 41 as core 20 on socket 1 00:04:09.440 EAL: Detected lcore 42 as core 21 on socket 1 00:04:09.440 EAL: Detected lcore 43 as core 25 on socket 1 00:04:09.440 EAL: Detected lcore 44 as core 26 on socket 1 00:04:09.440 EAL: Detected lcore 45 as core 27 on socket 1 00:04:09.440 EAL: Detected lcore 46 as core 28 on socket 1 00:04:09.440 EAL: Detected lcore 47 as core 29 on socket 1 00:04:09.440 EAL: Detected lcore 48 as core 0 on socket 0 00:04:09.440 EAL: Detected lcore 49 as core 1 on socket 0 00:04:09.440 EAL: Detected lcore 50 as core 2 on socket 0 00:04:09.440 EAL: Detected lcore 51 as core 3 on socket 0 00:04:09.440 EAL: Detected lcore 52 as core 4 on socket 0 00:04:09.440 EAL: Detected lcore 53 as core 5 on socket 0 00:04:09.440 EAL: Detected lcore 54 as core 6 on socket 0 00:04:09.440 EAL: Detected lcore 55 as core 8 on socket 0 00:04:09.440 EAL: Detected lcore 56 as core 9 on socket 0 00:04:09.440 EAL: Detected lcore 57 as core 10 on socket 0 00:04:09.440 EAL: Detected lcore 58 as core 11 on socket 0 00:04:09.440 EAL: Detected lcore 59 as core 12 
on socket 0
00:04:09.440 EAL: Detected lcore 60 as core 13 on socket 0
00:04:09.440 EAL: Detected lcore 61 as core 16 on socket 0
00:04:09.440 EAL: Detected lcore 62 as core 17 on socket 0
00:04:09.440 EAL: Detected lcore 63 as core 18 on socket 0
00:04:09.440 EAL: Detected lcore 64 as core 19 on socket 0
00:04:09.440 EAL: Detected lcore 65 as core 20 on socket 0
00:04:09.440 EAL: Detected lcore 66 as core 21 on socket 0
00:04:09.440 EAL: Detected lcore 67 as core 25 on socket 0
00:04:09.440 EAL: Detected lcore 68 as core 26 on socket 0
00:04:09.440 EAL: Detected lcore 69 as core 27 on socket 0
00:04:09.440 EAL: Detected lcore 70 as core 28 on socket 0
00:04:09.440 EAL: Detected lcore 71 as core 29 on socket 0
00:04:09.440 EAL: Detected lcore 72 as core 0 on socket 1
00:04:09.440 EAL: Detected lcore 73 as core 1 on socket 1
00:04:09.440 EAL: Detected lcore 74 as core 2 on socket 1
00:04:09.440 EAL: Detected lcore 75 as core 3 on socket 1
00:04:09.440 EAL: Detected lcore 76 as core 4 on socket 1
00:04:09.440 EAL: Detected lcore 77 as core 5 on socket 1
00:04:09.440 EAL: Detected lcore 78 as core 6 on socket 1
00:04:09.440 EAL: Detected lcore 79 as core 8 on socket 1
00:04:09.440 EAL: Detected lcore 80 as core 9 on socket 1
00:04:09.440 EAL: Detected lcore 81 as core 10 on socket 1
00:04:09.440 EAL: Detected lcore 82 as core 11 on socket 1
00:04:09.440 EAL: Detected lcore 83 as core 12 on socket 1
00:04:09.440 EAL: Detected lcore 84 as core 13 on socket 1
00:04:09.440 EAL: Detected lcore 85 as core 16 on socket 1
00:04:09.440 EAL: Detected lcore 86 as core 17 on socket 1
00:04:09.440 EAL: Detected lcore 87 as core 18 on socket 1
00:04:09.440 EAL: Detected lcore 88 as core 19 on socket 1
00:04:09.440 EAL: Detected lcore 89 as core 20 on socket 1
00:04:09.440 EAL: Detected lcore 90 as core 21 on socket 1
00:04:09.440 EAL: Detected lcore 91 as core 25 on socket 1
00:04:09.440 EAL: Detected lcore 92 as core 26 on socket 1
00:04:09.440 EAL: Detected lcore 93 as core 27 on socket 1
00:04:09.440 EAL: Detected lcore 94 as core 28 on socket 1
00:04:09.440 EAL: Detected lcore 95 as core 29 on socket 1
00:04:09.440 EAL: Maximum logical cores by configuration: 128
00:04:09.440 EAL: Detected CPU lcores: 96
00:04:09.440 EAL: Detected NUMA nodes: 2
00:04:09.440 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:09.440 EAL: Detected shared linkage of DPDK
00:04:09.440 EAL: No shared files mode enabled, IPC will be disabled
00:04:09.440 EAL: Bus pci wants IOVA as 'DC'
00:04:09.440 EAL: Buses did not request a specific IOVA mode.
00:04:09.440 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:09.440 EAL: Selected IOVA mode 'VA'
00:04:09.440 EAL: Probing VFIO support...
00:04:09.440 EAL: IOMMU type 1 (Type 1) is supported
00:04:09.440 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:09.440 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:09.440 EAL: VFIO support initialized
00:04:09.440 EAL: Ask a virtual area of 0x2e000 bytes
00:04:09.440 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:09.440 EAL: Setting up physically contiguous memory...
00:04:09.440 EAL: Setting maximum number of open files to 524288
00:04:09.440 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:09.440 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:09.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:09.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:09.440 EAL: Ask a virtual area of 0x61000 bytes
00:04:09.440 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:09.440 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:09.440 EAL: Ask a virtual area of 0x400000000 bytes
00:04:09.440 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:09.440 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:09.440 EAL: Hugepages will be freed exactly as allocated.
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: TSC frequency is ~2100000 KHz
00:04:09.440 EAL: Main lcore 0 is ready (tid=7f1c9aec7a00;cpuset=[0])
00:04:09.440 EAL: Trying to obtain current memory policy.
00:04:09.440 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.440 EAL: Restoring previous memory policy: 0
00:04:09.440 EAL: request: mp_malloc_sync
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: Heap on socket 0 was expanded by 2MB
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:09.440 EAL: Mem event callback 'spdk:(nil)' registered
00:04:09.440 
00:04:09.440 
00:04:09.440 CUnit - A unit testing framework for C - Version 2.1-3
00:04:09.440 http://cunit.sourceforge.net/
00:04:09.440 
00:04:09.440 
00:04:09.440 Suite: components_suite
00:04:09.440 Test: vtophys_malloc_test ...passed
00:04:09.440 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:09.440 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.440 EAL: Restoring previous memory policy: 4
00:04:09.440 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.440 EAL: request: mp_malloc_sync
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: Heap on socket 0 was expanded by 4MB
00:04:09.440 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.440 EAL: request: mp_malloc_sync
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: Heap on socket 0 was shrunk by 4MB
00:04:09.440 EAL: Trying to obtain current memory policy.
00:04:09.440 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.440 EAL: Restoring previous memory policy: 4
00:04:09.440 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.440 EAL: request: mp_malloc_sync
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: Heap on socket 0 was expanded by 6MB
00:04:09.440 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.440 EAL: request: mp_malloc_sync
00:04:09.440 EAL: No shared files mode enabled, IPC is disabled
00:04:09.440 EAL: Heap on socket 0 was shrunk by 6MB
00:04:09.440 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.441 EAL: Restoring previous memory policy: 4
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was expanded by 10MB
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was shrunk by 10MB
00:04:09.441 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.441 EAL: Restoring previous memory policy: 4
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was expanded by 18MB
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was shrunk by 18MB
00:04:09.441 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.441 EAL: Restoring previous memory policy: 4
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was expanded by 34MB
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was shrunk by 34MB
00:04:09.441 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.441 EAL: Restoring previous memory policy: 4
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was expanded by 66MB
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was shrunk by 66MB
00:04:09.441 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.441 EAL: Restoring previous memory policy: 4
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was expanded by 130MB
00:04:09.441 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.441 EAL: request: mp_malloc_sync
00:04:09.441 EAL: No shared files mode enabled, IPC is disabled
00:04:09.441 EAL: Heap on socket 0 was shrunk by 130MB
00:04:09.441 EAL: Trying to obtain current memory policy.
00:04:09.441 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.699 EAL: Restoring previous memory policy: 4
00:04:09.699 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.699 EAL: request: mp_malloc_sync
00:04:09.699 EAL: No shared files mode enabled, IPC is disabled
00:04:09.699 EAL: Heap on socket 0 was expanded by 258MB
00:04:09.699 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.699 EAL: request: mp_malloc_sync
00:04:09.699 EAL: No shared files mode enabled, IPC is disabled
00:04:09.699 EAL: Heap on socket 0 was shrunk by 258MB
00:04:09.699 EAL: Trying to obtain current memory policy.
00:04:09.699 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:09.699 EAL: Restoring previous memory policy: 4
00:04:09.699 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.699 EAL: request: mp_malloc_sync
00:04:09.700 EAL: No shared files mode enabled, IPC is disabled
00:04:09.700 EAL: Heap on socket 0 was expanded by 514MB
00:04:09.964 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.964 EAL: request: mp_malloc_sync
00:04:09.964 EAL: No shared files mode enabled, IPC is disabled
00:04:09.964 EAL: Heap on socket 0 was shrunk by 514MB
00:04:09.964 EAL: Trying to obtain current memory policy.
00:04:09.964 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:10.223 EAL: Restoring previous memory policy: 4
00:04:10.223 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.223 EAL: request: mp_malloc_sync
00:04:10.223 EAL: No shared files mode enabled, IPC is disabled
00:04:10.223 EAL: Heap on socket 0 was expanded by 1026MB
00:04:10.223 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.482 EAL: request: mp_malloc_sync
00:04:10.482 EAL: No shared files mode enabled, IPC is disabled
00:04:10.482 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:10.482 passed
00:04:10.482 
00:04:10.482 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.482 suites 1 1 n/a 0 0
00:04:10.482 tests 2 2 2 0 0
00:04:10.482 asserts 497 497 497 0 n/a
00:04:10.482 
00:04:10.482 Elapsed time = 0.966 seconds
00:04:10.482 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.482 EAL: request: mp_malloc_sync
00:04:10.482 EAL: No shared files mode enabled, IPC is disabled
00:04:10.482 EAL: Heap on socket 0 was shrunk by 2MB
00:04:10.482 EAL: No shared files mode enabled, IPC is disabled
00:04:10.482 EAL: No shared files mode enabled, IPC is disabled
00:04:10.482 EAL: No shared files mode enabled, IPC is disabled
00:04:10.482 
00:04:10.482 real 0m1.093s
00:04:10.482 user 0m0.644s
00:04:10.482 sys 0m0.424s
00:04:10.482 15:37:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.482 15:37:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:10.482 ************************************
00:04:10.482 END TEST env_vtophys
00:04:10.482 ************************************
00:04:10.482 15:37:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:10.482 15:37:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:10.482 15:37:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.482 15:37:05 env -- common/autotest_common.sh@10 -- # set +x
************************************
00:04:10.482 START TEST env_pci
00:04:10.482 ************************************
00:04:10.482 15:37:05 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:10.482 
00:04:10.482 
00:04:10.482 CUnit - A unit testing framework for C - Version 2.1-3
00:04:10.482 http://cunit.sourceforge.net/
00:04:10.482 
00:04:10.482 
00:04:10.482 Suite: pci
00:04:10.482 Test: pci_hook ...[2024-12-09 15:37:05.608143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1802411 has claimed it
00:04:10.482 EAL: Cannot find device (10000:00:01.0)
00:04:10.482 EAL: Failed to attach device on primary process
00:04:10.482 passed
00:04:10.482 
00:04:10.482 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.482 suites 1 1 n/a 0 0
00:04:10.482 tests 1 1 1 0 0
00:04:10.482 asserts 25 25 25 0 n/a
00:04:10.482 
00:04:10.482 Elapsed time = 0.026 seconds
00:04:10.482 
00:04:10.482 real 0m0.046s
00:04:10.482 user 0m0.015s
00:04:10.482 sys 0m0.030s
00:04:10.482 15:37:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.482 15:37:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:10.482 ************************************
00:04:10.482 END TEST env_pci
00:04:10.482 ************************************
00:04:10.482 15:37:05 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:10.482 15:37:05 env -- env/env.sh@15 -- # uname
00:04:10.482 15:37:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:10.482 15:37:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:10.482 15:37:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:10.482 15:37:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:10.482 15:37:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.482 15:37:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.741 ************************************
00:04:10.741 START TEST env_dpdk_post_init
00:04:10.741 ************************************
00:04:10.741 15:37:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:10.741 EAL: Detected CPU lcores: 96
00:04:10.741 EAL: Detected NUMA nodes: 2
00:04:10.741 EAL: Detected shared linkage of DPDK
00:04:10.741 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:10.741 EAL: Selected IOVA mode 'VA'
00:04:10.741 EAL: VFIO support initialized
00:04:10.741 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:10.741 EAL: Using IOMMU type 1 (Type 1)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:10.741 EAL: Ignore mapping IO port bar(1)
00:04:10.741 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:11.678 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:11.678 EAL: Ignore mapping IO port bar(1)
00:04:11.678 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:14.962 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:14.962 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:14.962 Starting DPDK initialization...
00:04:14.962 Starting SPDK post initialization...
00:04:14.962 SPDK NVMe probe
00:04:14.962 Attaching to 0000:5e:00.0
00:04:14.962 Attached to 0000:5e:00.0
00:04:14.962 Cleaning up...
00:04:14.962 
00:04:14.962 real 0m4.349s
00:04:14.962 user 0m2.965s
00:04:14.962 sys 0m0.450s
00:04:14.962 15:37:10 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.962 15:37:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:14.962 ************************************
00:04:14.962 END TEST env_dpdk_post_init
00:04:14.962 ************************************
00:04:14.962 15:37:10 env -- env/env.sh@26 -- # uname
00:04:14.962 15:37:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:14.962 15:37:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:14.962 15:37:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.963 15:37:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.963 15:37:10 env -- common/autotest_common.sh@10 -- # set +x
00:04:14.963 ************************************
00:04:14.963 START TEST env_mem_callbacks
00:04:14.963 ************************************
00:04:14.963 15:37:10 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:14.963 EAL: Detected CPU lcores: 96
00:04:14.963 EAL: Detected NUMA nodes: 2
00:04:14.963 EAL: Detected shared linkage of DPDK
00:04:14.963 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:14.963 EAL: Selected IOVA mode 'VA'
00:04:14.963 EAL: VFIO support initialized
00:04:14.963 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:14.963 
00:04:14.963 
00:04:14.963 CUnit - A unit testing framework for C - Version 2.1-3
00:04:14.963 http://cunit.sourceforge.net/
00:04:14.963 
00:04:14.963 
00:04:14.963 Suite: memory
00:04:14.963 Test: test ...
00:04:14.963 register 0x200000200000 2097152
00:04:14.963 malloc 3145728
00:04:14.963 register 0x200000400000 4194304
00:04:14.963 buf 0x200000500000 len 3145728 PASSED
00:04:14.963 malloc 64
00:04:14.963 buf 0x2000004fff40 len 64 PASSED
00:04:14.963 malloc 4194304
00:04:14.963 register 0x200000800000 6291456
00:04:14.963 buf 0x200000a00000 len 4194304 PASSED
00:04:14.963 free 0x200000500000 3145728
00:04:15.222 free 0x2000004fff40 64
00:04:15.222 unregister 0x200000400000 4194304 PASSED
00:04:15.222 free 0x200000a00000 4194304
00:04:15.222 unregister 0x200000800000 6291456 PASSED
00:04:15.222 malloc 8388608
00:04:15.222 register 0x200000400000 10485760
00:04:15.222 buf 0x200000600000 len 8388608 PASSED
00:04:15.222 free 0x200000600000 8388608
00:04:15.222 unregister 0x200000400000 10485760 PASSED
00:04:15.222 passed
00:04:15.222 
00:04:15.222 Run Summary: Type Total Ran Passed Failed Inactive
00:04:15.222 suites 1 1 n/a 0 0
00:04:15.222 tests 1 1 1 0 0
00:04:15.222 asserts 15 15 15 0 n/a
00:04:15.222 
00:04:15.222 Elapsed time = 0.008 seconds
00:04:15.222 
00:04:15.222 real 0m0.058s
00:04:15.222 user 0m0.019s
00:04:15.222 sys 0m0.038s
00:04:15.222 15:37:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.222 15:37:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:15.222 ************************************
00:04:15.222 END TEST env_mem_callbacks
00:04:15.222 ************************************
00:04:15.222 
00:04:15.222 real 0m6.222s
00:04:15.222 user 0m4.026s
00:04:15.222 sys 0m1.271s
00:04:15.222 15:37:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:15.222 15:37:10 env -- common/autotest_common.sh@10 -- # set +x
00:04:15.222 ************************************
00:04:15.222 END TEST env
00:04:15.222 ************************************
00:04:15.222 15:37:10 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:15.222 15:37:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.222 15:37:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.222 15:37:10 -- common/autotest_common.sh@10 -- # set +x
00:04:15.222 ************************************
00:04:15.222 START TEST rpc
00:04:15.222 ************************************
00:04:15.222 15:37:10 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:15.222 * Looking for test storage...
00:04:15.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:15.222 15:37:10 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:15.222 15:37:10 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:15.222 15:37:10 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.481 15:37:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.481 15:37:10 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.481 15:37:10 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.481 15:37:10 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.481 15:37:10 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.481 15:37:10 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:15.481 15:37:10 rpc -- scripts/common.sh@345 -- # : 1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.481 15:37:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.481 15:37:10 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@353 -- # local d=1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.481 15:37:10 rpc -- scripts/common.sh@355 -- # echo 1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.481 15:37:10 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@353 -- # local d=2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.481 15:37:10 rpc -- scripts/common.sh@355 -- # echo 2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.481 15:37:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.481 15:37:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.481 15:37:10 rpc -- scripts/common.sh@368 -- # return 0
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.481 --rc genhtml_branch_coverage=1
00:04:15.481 --rc genhtml_function_coverage=1
00:04:15.481 --rc genhtml_legend=1
00:04:15.481 --rc geninfo_all_blocks=1
00:04:15.481 --rc geninfo_unexecuted_blocks=1
00:04:15.481 
00:04:15.481 '
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.481 --rc genhtml_branch_coverage=1
00:04:15.481 --rc genhtml_function_coverage=1
00:04:15.481 --rc genhtml_legend=1
00:04:15.481 --rc geninfo_all_blocks=1
00:04:15.481 --rc geninfo_unexecuted_blocks=1
00:04:15.481 
00:04:15.481 '
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.481 --rc genhtml_branch_coverage=1
00:04:15.481 --rc genhtml_function_coverage=1
00:04:15.481 --rc genhtml_legend=1
00:04:15.481 --rc geninfo_all_blocks=1
00:04:15.481 --rc geninfo_unexecuted_blocks=1
00:04:15.481 
00:04:15.481 '
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.481 --rc genhtml_branch_coverage=1
00:04:15.481 --rc genhtml_function_coverage=1
00:04:15.481 --rc genhtml_legend=1
00:04:15.481 --rc geninfo_all_blocks=1
00:04:15.481 --rc geninfo_unexecuted_blocks=1
00:04:15.481 
00:04:15.481 '
00:04:15.481 15:37:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1803350
00:04:15.481 15:37:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:15.481 15:37:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:15.481 15:37:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1803350
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 1803350 ']'
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:15.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:15.481 15:37:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:15.481 [2024-12-09 15:37:10.526007] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
00:04:15.481 [2024-12-09 15:37:10.526051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803350 ]
00:04:15.481 [2024-12-09 15:37:10.602507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:15.481 [2024-12-09 15:37:10.639997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:15.481 [2024-12-09 15:37:10.640037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1803350' to capture a snapshot of events at runtime.
00:04:15.481 [2024-12-09 15:37:10.640044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:15.481 [2024-12-09 15:37:10.640049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:15.481 [2024-12-09 15:37:10.640054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1803350 for offline analysis/debug.
00:04:15.481 [2024-12-09 15:37:10.640602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:15.740 15:37:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:15.740 15:37:10 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:15.740 15:37:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:15.740 15:37:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:15.740 15:37:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:15.740 15:37:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:15.740 15:37:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:15.740 15:37:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:15.740 15:37:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:15.740 ************************************
00:04:15.740 START TEST rpc_integrity
00:04:15.740 ************************************
00:04:15.740 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:15.740 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:15.740 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:15.740 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:15.740 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:15.740 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:15.740 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:15.999 15:37:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:15.999 {
00:04:15.999 "name": "Malloc0",
00:04:15.999 "aliases": [
00:04:15.999 "67894c20-afe4-45ff-96a5-90df3bc2b304"
00:04:15.999 ],
00:04:15.999 "product_name": "Malloc disk",
00:04:15.999 "block_size": 512,
00:04:15.999 "num_blocks": 16384,
00:04:15.999 "uuid": "67894c20-afe4-45ff-96a5-90df3bc2b304",
00:04:15.999 "assigned_rate_limits": {
00:04:15.999 "rw_ios_per_sec": 0,
00:04:15.999 "rw_mbytes_per_sec": 0,
00:04:15.999 "r_mbytes_per_sec": 0,
00:04:15.999 "w_mbytes_per_sec": 0
00:04:15.999 },
00:04:15.999 "claimed": false,
00:04:15.999 "zoned": false,
00:04:15.999 "supported_io_types": {
00:04:15.999 "read": true,
00:04:15.999 "write": true,
00:04:15.999 "unmap": true,
00:04:15.999 "flush": true,
00:04:15.999 "reset": true,
00:04:15.999 "nvme_admin": false,
00:04:15.999 "nvme_io": false,
00:04:15.999 "nvme_io_md": false,
00:04:15.999 "write_zeroes": true,
00:04:15.999 "zcopy": true,
00:04:15.999 "get_zone_info": false,
00:04:15.999 "zone_management": false,
00:04:15.999 "zone_append": false,
00:04:15.999 "compare": false,
00:04:15.999 "compare_and_write": false,
00:04:15.999 "abort": true,
00:04:15.999 "seek_hole": false,
00:04:15.999 "seek_data": false,
00:04:15.999 "copy": true,
00:04:15.999 "nvme_iov_md": false
00:04:15.999 },
00:04:15.999 "memory_domains": [
00:04:15.999 {
00:04:15.999 "dma_device_id": "system",
00:04:15.999 "dma_device_type": 1
00:04:15.999 },
00:04:15.999 {
00:04:15.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:15.999 "dma_device_type": 2
00:04:15.999 }
00:04:15.999 ],
00:04:15.999 "driver_specific": {}
00:04:15.999 }
00:04:15.999 ]'
00:04:15.999 15:37:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:15.999 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:15.999 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:15.999 [2024-12-09 15:37:11.044465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:15.999 [2024-12-09 15:37:11.044500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:15.999 [2024-12-09 15:37:11.044512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b16a40
00:04:15.999 [2024-12-09 15:37:11.044518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:15.999 [2024-12-09 15:37:11.045603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:15.999 [2024-12-09 15:37:11.045626] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:15.999 Passthru0
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:15.999 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:15.999 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:15.999 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:15.999 {
00:04:15.999 "name": "Malloc0",
00:04:15.999 "aliases": [
00:04:15.999 "67894c20-afe4-45ff-96a5-90df3bc2b304"
00:04:15.999 ],
00:04:15.999 "product_name": "Malloc disk",
00:04:15.999 "block_size": 512,
00:04:15.999 "num_blocks": 16384,
00:04:15.999 "uuid": "67894c20-afe4-45ff-96a5-90df3bc2b304",
00:04:15.999 "assigned_rate_limits": {
00:04:15.999 "rw_ios_per_sec": 0,
00:04:15.999 "rw_mbytes_per_sec": 0,
00:04:15.999 "r_mbytes_per_sec": 0,
00:04:15.999 "w_mbytes_per_sec": 0
00:04:15.999 },
00:04:15.999 "claimed": true,
00:04:15.999 "claim_type": "exclusive_write",
00:04:15.999 "zoned": false,
00:04:15.999 "supported_io_types": {
00:04:15.999 "read": true,
00:04:15.999 "write": true,
00:04:15.999 "unmap": true,
00:04:15.999 "flush": true,
00:04:15.999 "reset": true,
00:04:15.999 "nvme_admin": false,
00:04:15.999 "nvme_io": false,
00:04:15.999 "nvme_io_md": false,
00:04:15.999 "write_zeroes": true,
00:04:15.999 "zcopy": true,
00:04:15.999 "get_zone_info": false,
00:04:15.999 "zone_management": false,
00:04:15.999 "zone_append": false,
00:04:15.999 "compare": false,
00:04:15.999 "compare_and_write": false,
00:04:15.999 "abort": true,
00:04:15.999 "seek_hole": false,
00:04:15.999 "seek_data": false,
00:04:15.999 "copy": true,
00:04:15.999 "nvme_iov_md": false
00:04:15.999 },
00:04:15.999 "memory_domains": [
00:04:15.999 {
00:04:15.999 "dma_device_id": "system",
00:04:15.999 "dma_device_type": 1
00:04:15.999 },
00:04:15.999 {
00:04:15.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:15.999 "dma_device_type": 2
00:04:15.999 }
00:04:15.999 ],
00:04:15.999 "driver_specific": {}
00:04:15.999 },
00:04:15.999 {
00:04:15.999 "name": "Passthru0", 00:04:15.999 "aliases": [ 00:04:15.999 "4574cf18-5807-5d87-b7ea-efcaaf5058f4" 00:04:16.000 ], 00:04:16.000 "product_name": "passthru", 00:04:16.000 "block_size": 512, 00:04:16.000 "num_blocks": 16384, 00:04:16.000 "uuid": "4574cf18-5807-5d87-b7ea-efcaaf5058f4", 00:04:16.000 "assigned_rate_limits": { 00:04:16.000 "rw_ios_per_sec": 0, 00:04:16.000 "rw_mbytes_per_sec": 0, 00:04:16.000 "r_mbytes_per_sec": 0, 00:04:16.000 "w_mbytes_per_sec": 0 00:04:16.000 }, 00:04:16.000 "claimed": false, 00:04:16.000 "zoned": false, 00:04:16.000 "supported_io_types": { 00:04:16.000 "read": true, 00:04:16.000 "write": true, 00:04:16.000 "unmap": true, 00:04:16.000 "flush": true, 00:04:16.000 "reset": true, 00:04:16.000 "nvme_admin": false, 00:04:16.000 "nvme_io": false, 00:04:16.000 "nvme_io_md": false, 00:04:16.000 "write_zeroes": true, 00:04:16.000 "zcopy": true, 00:04:16.000 "get_zone_info": false, 00:04:16.000 "zone_management": false, 00:04:16.000 "zone_append": false, 00:04:16.000 "compare": false, 00:04:16.000 "compare_and_write": false, 00:04:16.000 "abort": true, 00:04:16.000 "seek_hole": false, 00:04:16.000 "seek_data": false, 00:04:16.000 "copy": true, 00:04:16.000 "nvme_iov_md": false 00:04:16.000 }, 00:04:16.000 "memory_domains": [ 00:04:16.000 { 00:04:16.000 "dma_device_id": "system", 00:04:16.000 "dma_device_type": 1 00:04:16.000 }, 00:04:16.000 { 00:04:16.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.000 "dma_device_type": 2 00:04:16.000 } 00:04:16.000 ], 00:04:16.000 "driver_specific": { 00:04:16.000 "passthru": { 00:04:16.000 "name": "Passthru0", 00:04:16.000 "base_bdev_name": "Malloc0" 00:04:16.000 } 00:04:16.000 } 00:04:16.000 } 00:04:16.000 ]' 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.000 15:37:11 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.000 15:37:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.000 00:04:16.000 real 0m0.273s 00:04:16.000 user 0m0.175s 00:04:16.000 sys 0m0.033s 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.000 15:37:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.000 ************************************ 00:04:16.000 END TEST rpc_integrity 00:04:16.000 ************************************ 00:04:16.000 15:37:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.000 15:37:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.000 15:37:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.000 15:37:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.258 ************************************ 00:04:16.258 START TEST rpc_plugins 
00:04:16.258 ************************************ 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.258 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.258 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.258 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.258 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.258 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.258 { 00:04:16.258 "name": "Malloc1", 00:04:16.258 "aliases": [ 00:04:16.258 "ae0d87c6-5096-478c-a5f4-a508d2e4b710" 00:04:16.258 ], 00:04:16.258 "product_name": "Malloc disk", 00:04:16.258 "block_size": 4096, 00:04:16.258 "num_blocks": 256, 00:04:16.258 "uuid": "ae0d87c6-5096-478c-a5f4-a508d2e4b710", 00:04:16.258 "assigned_rate_limits": { 00:04:16.258 "rw_ios_per_sec": 0, 00:04:16.258 "rw_mbytes_per_sec": 0, 00:04:16.258 "r_mbytes_per_sec": 0, 00:04:16.258 "w_mbytes_per_sec": 0 00:04:16.258 }, 00:04:16.258 "claimed": false, 00:04:16.258 "zoned": false, 00:04:16.258 "supported_io_types": { 00:04:16.258 "read": true, 00:04:16.258 "write": true, 00:04:16.258 "unmap": true, 00:04:16.258 "flush": true, 00:04:16.258 "reset": true, 00:04:16.258 "nvme_admin": false, 00:04:16.258 "nvme_io": false, 00:04:16.258 "nvme_io_md": false, 00:04:16.258 "write_zeroes": true, 00:04:16.258 "zcopy": true, 00:04:16.258 "get_zone_info": false, 00:04:16.258 "zone_management": false, 00:04:16.258 
"zone_append": false, 00:04:16.258 "compare": false, 00:04:16.258 "compare_and_write": false, 00:04:16.258 "abort": true, 00:04:16.258 "seek_hole": false, 00:04:16.258 "seek_data": false, 00:04:16.258 "copy": true, 00:04:16.258 "nvme_iov_md": false 00:04:16.258 }, 00:04:16.259 "memory_domains": [ 00:04:16.259 { 00:04:16.259 "dma_device_id": "system", 00:04:16.259 "dma_device_type": 1 00:04:16.259 }, 00:04:16.259 { 00:04:16.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.259 "dma_device_type": 2 00:04:16.259 } 00:04:16.259 ], 00:04:16.259 "driver_specific": {} 00:04:16.259 } 00:04:16.259 ]' 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.259 15:37:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.259 00:04:16.259 real 0m0.142s 00:04:16.259 user 0m0.083s 00:04:16.259 sys 0m0.023s 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.259 15:37:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.259 ************************************ 
00:04:16.259 END TEST rpc_plugins 00:04:16.259 ************************************ 00:04:16.259 15:37:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.259 15:37:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.259 15:37:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.259 15:37:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.259 ************************************ 00:04:16.259 START TEST rpc_trace_cmd_test 00:04:16.259 ************************************ 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.259 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1803350", 00:04:16.259 "tpoint_group_mask": "0x8", 00:04:16.259 "iscsi_conn": { 00:04:16.259 "mask": "0x2", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "scsi": { 00:04:16.259 "mask": "0x4", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "bdev": { 00:04:16.259 "mask": "0x8", 00:04:16.259 "tpoint_mask": "0xffffffffffffffff" 00:04:16.259 }, 00:04:16.259 "nvmf_rdma": { 00:04:16.259 "mask": "0x10", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "nvmf_tcp": { 00:04:16.259 "mask": "0x20", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "ftl": { 00:04:16.259 "mask": "0x40", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "blobfs": { 00:04:16.259 "mask": "0x80", 00:04:16.259 
"tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "dsa": { 00:04:16.259 "mask": "0x200", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "thread": { 00:04:16.259 "mask": "0x400", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "nvme_pcie": { 00:04:16.259 "mask": "0x800", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "iaa": { 00:04:16.259 "mask": "0x1000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "nvme_tcp": { 00:04:16.259 "mask": "0x2000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "bdev_nvme": { 00:04:16.259 "mask": "0x4000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "sock": { 00:04:16.259 "mask": "0x8000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "blob": { 00:04:16.259 "mask": "0x10000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "bdev_raid": { 00:04:16.259 "mask": "0x20000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 }, 00:04:16.259 "scheduler": { 00:04:16.259 "mask": "0x40000", 00:04:16.259 "tpoint_mask": "0x0" 00:04:16.259 } 00:04:16.259 }' 00:04:16.259 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:16.518 00:04:16.518 real 0m0.210s 00:04:16.518 user 0m0.179s 00:04:16.518 sys 0m0.025s 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.518 15:37:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.518 ************************************ 00:04:16.518 END TEST rpc_trace_cmd_test 00:04:16.518 ************************************ 00:04:16.518 15:37:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.518 15:37:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.518 15:37:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.518 15:37:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.518 15:37:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.518 15:37:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.518 ************************************ 00:04:16.518 START TEST rpc_daemon_integrity 00:04:16.518 ************************************ 00:04:16.518 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.518 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.518 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.518 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.776 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.776 { 00:04:16.776 "name": "Malloc2", 00:04:16.776 "aliases": [ 00:04:16.776 "51bea622-4e25-43af-9144-393c13aabfe5" 00:04:16.776 ], 00:04:16.776 "product_name": "Malloc disk", 00:04:16.776 "block_size": 512, 00:04:16.776 "num_blocks": 16384, 00:04:16.777 "uuid": "51bea622-4e25-43af-9144-393c13aabfe5", 00:04:16.777 "assigned_rate_limits": { 00:04:16.777 "rw_ios_per_sec": 0, 00:04:16.777 "rw_mbytes_per_sec": 0, 00:04:16.777 "r_mbytes_per_sec": 0, 00:04:16.777 "w_mbytes_per_sec": 0 00:04:16.777 }, 00:04:16.777 "claimed": false, 00:04:16.777 "zoned": false, 00:04:16.777 "supported_io_types": { 00:04:16.777 "read": true, 00:04:16.777 "write": true, 00:04:16.777 "unmap": true, 00:04:16.777 "flush": true, 00:04:16.777 "reset": true, 00:04:16.777 "nvme_admin": false, 00:04:16.777 "nvme_io": false, 00:04:16.777 "nvme_io_md": false, 00:04:16.777 "write_zeroes": true, 00:04:16.777 "zcopy": true, 00:04:16.777 "get_zone_info": false, 00:04:16.777 "zone_management": false, 00:04:16.777 "zone_append": false, 00:04:16.777 "compare": false, 00:04:16.777 "compare_and_write": false, 00:04:16.777 "abort": true, 00:04:16.777 "seek_hole": false, 00:04:16.777 "seek_data": false, 00:04:16.777 "copy": true, 00:04:16.777 "nvme_iov_md": false 00:04:16.777 }, 00:04:16.777 "memory_domains": [ 00:04:16.777 { 
00:04:16.777 "dma_device_id": "system", 00:04:16.777 "dma_device_type": 1 00:04:16.777 }, 00:04:16.777 { 00:04:16.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.777 "dma_device_type": 2 00:04:16.777 } 00:04:16.777 ], 00:04:16.777 "driver_specific": {} 00:04:16.777 } 00:04:16.777 ]' 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 [2024-12-09 15:37:11.866673] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.777 [2024-12-09 15:37:11.866701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.777 [2024-12-09 15:37:11.866712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ae42e0 00:04:16.777 [2024-12-09 15:37:11.866718] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.777 [2024-12-09 15:37:11.867675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.777 [2024-12-09 15:37:11.867697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.777 Passthru0 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.777 { 00:04:16.777 "name": "Malloc2", 00:04:16.777 "aliases": [ 00:04:16.777 "51bea622-4e25-43af-9144-393c13aabfe5" 00:04:16.777 ], 00:04:16.777 "product_name": "Malloc disk", 00:04:16.777 "block_size": 512, 00:04:16.777 "num_blocks": 16384, 00:04:16.777 "uuid": "51bea622-4e25-43af-9144-393c13aabfe5", 00:04:16.777 "assigned_rate_limits": { 00:04:16.777 "rw_ios_per_sec": 0, 00:04:16.777 "rw_mbytes_per_sec": 0, 00:04:16.777 "r_mbytes_per_sec": 0, 00:04:16.777 "w_mbytes_per_sec": 0 00:04:16.777 }, 00:04:16.777 "claimed": true, 00:04:16.777 "claim_type": "exclusive_write", 00:04:16.777 "zoned": false, 00:04:16.777 "supported_io_types": { 00:04:16.777 "read": true, 00:04:16.777 "write": true, 00:04:16.777 "unmap": true, 00:04:16.777 "flush": true, 00:04:16.777 "reset": true, 00:04:16.777 "nvme_admin": false, 00:04:16.777 "nvme_io": false, 00:04:16.777 "nvme_io_md": false, 00:04:16.777 "write_zeroes": true, 00:04:16.777 "zcopy": true, 00:04:16.777 "get_zone_info": false, 00:04:16.777 "zone_management": false, 00:04:16.777 "zone_append": false, 00:04:16.777 "compare": false, 00:04:16.777 "compare_and_write": false, 00:04:16.777 "abort": true, 00:04:16.777 "seek_hole": false, 00:04:16.777 "seek_data": false, 00:04:16.777 "copy": true, 00:04:16.777 "nvme_iov_md": false 00:04:16.777 }, 00:04:16.777 "memory_domains": [ 00:04:16.777 { 00:04:16.777 "dma_device_id": "system", 00:04:16.777 "dma_device_type": 1 00:04:16.777 }, 00:04:16.777 { 00:04:16.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.777 "dma_device_type": 2 00:04:16.777 } 00:04:16.777 ], 00:04:16.777 "driver_specific": {} 00:04:16.777 }, 00:04:16.777 { 00:04:16.777 "name": "Passthru0", 00:04:16.777 "aliases": [ 00:04:16.777 "2c16c81e-acc3-53fc-b893-7feb67ef9d60" 00:04:16.777 ], 00:04:16.777 "product_name": "passthru", 00:04:16.777 "block_size": 512, 00:04:16.777 "num_blocks": 16384, 00:04:16.777 "uuid": 
"2c16c81e-acc3-53fc-b893-7feb67ef9d60", 00:04:16.777 "assigned_rate_limits": { 00:04:16.777 "rw_ios_per_sec": 0, 00:04:16.777 "rw_mbytes_per_sec": 0, 00:04:16.777 "r_mbytes_per_sec": 0, 00:04:16.777 "w_mbytes_per_sec": 0 00:04:16.777 }, 00:04:16.777 "claimed": false, 00:04:16.777 "zoned": false, 00:04:16.777 "supported_io_types": { 00:04:16.777 "read": true, 00:04:16.777 "write": true, 00:04:16.777 "unmap": true, 00:04:16.777 "flush": true, 00:04:16.777 "reset": true, 00:04:16.777 "nvme_admin": false, 00:04:16.777 "nvme_io": false, 00:04:16.777 "nvme_io_md": false, 00:04:16.777 "write_zeroes": true, 00:04:16.777 "zcopy": true, 00:04:16.777 "get_zone_info": false, 00:04:16.777 "zone_management": false, 00:04:16.777 "zone_append": false, 00:04:16.777 "compare": false, 00:04:16.777 "compare_and_write": false, 00:04:16.777 "abort": true, 00:04:16.777 "seek_hole": false, 00:04:16.777 "seek_data": false, 00:04:16.777 "copy": true, 00:04:16.777 "nvme_iov_md": false 00:04:16.777 }, 00:04:16.777 "memory_domains": [ 00:04:16.777 { 00:04:16.777 "dma_device_id": "system", 00:04:16.777 "dma_device_type": 1 00:04:16.777 }, 00:04:16.777 { 00:04:16.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.777 "dma_device_type": 2 00:04:16.777 } 00:04:16.777 ], 00:04:16.777 "driver_specific": { 00:04:16.777 "passthru": { 00:04:16.777 "name": "Passthru0", 00:04:16.777 "base_bdev_name": "Malloc2" 00:04:16.777 } 00:04:16.777 } 00:04:16.777 } 00:04:16.777 ]' 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.777 15:37:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.036 15:37:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.036 00:04:17.036 real 0m0.265s 00:04:17.036 user 0m0.166s 00:04:17.036 sys 0m0.034s 00:04:17.036 15:37:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.036 15:37:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.036 ************************************ 00:04:17.036 END TEST rpc_daemon_integrity 00:04:17.036 ************************************ 00:04:17.036 15:37:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.036 15:37:12 rpc -- rpc/rpc.sh@84 -- # killprocess 1803350 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 1803350 ']' 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@958 -- # kill -0 1803350 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.036 15:37:12 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803350 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803350' 00:04:17.036 killing process with pid 1803350 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@973 -- # kill 1803350 00:04:17.036 15:37:12 rpc -- common/autotest_common.sh@978 -- # wait 1803350 00:04:17.295 00:04:17.295 real 0m2.083s 00:04:17.295 user 0m2.653s 00:04:17.295 sys 0m0.678s 00:04:17.295 15:37:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.295 15:37:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.295 ************************************ 00:04:17.295 END TEST rpc 00:04:17.295 ************************************ 00:04:17.295 15:37:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.295 15:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.295 15:37:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.295 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:04:17.295 ************************************ 00:04:17.295 START TEST skip_rpc 00:04:17.295 ************************************ 00:04:17.295 15:37:12 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.554 * Looking for test storage... 
00:04:17.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.554 15:37:12 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.554 15:37:12 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.554 15:37:12 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.554 15:37:12 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.555 15:37:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.555 --rc genhtml_branch_coverage=1 00:04:17.555 --rc genhtml_function_coverage=1 00:04:17.555 --rc genhtml_legend=1 00:04:17.555 --rc geninfo_all_blocks=1 00:04:17.555 --rc geninfo_unexecuted_blocks=1 00:04:17.555 00:04:17.555 ' 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.555 --rc genhtml_branch_coverage=1 00:04:17.555 --rc genhtml_function_coverage=1 00:04:17.555 --rc genhtml_legend=1 00:04:17.555 --rc geninfo_all_blocks=1 00:04:17.555 --rc geninfo_unexecuted_blocks=1 00:04:17.555 00:04:17.555 ' 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:17.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.555 --rc genhtml_branch_coverage=1 00:04:17.555 --rc genhtml_function_coverage=1 00:04:17.555 --rc genhtml_legend=1 00:04:17.555 --rc geninfo_all_blocks=1 00:04:17.555 --rc geninfo_unexecuted_blocks=1 00:04:17.555 00:04:17.555 ' 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.555 --rc genhtml_branch_coverage=1 00:04:17.555 --rc genhtml_function_coverage=1 00:04:17.555 --rc genhtml_legend=1 00:04:17.555 --rc geninfo_all_blocks=1 00:04:17.555 --rc geninfo_unexecuted_blocks=1 00:04:17.555 00:04:17.555 ' 00:04:17.555 15:37:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.555 15:37:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.555 15:37:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.555 15:37:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.555 ************************************ 00:04:17.555 START TEST skip_rpc 00:04:17.555 ************************************ 00:04:17.555 15:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:17.555 15:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1803978 00:04:17.555 15:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.555 15:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.555 15:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:17.555 [2024-12-09 15:37:12.720369] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:17.555 [2024-12-09 15:37:12.720411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803978 ] 00:04:17.814 [2024-12-09 15:37:12.794286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.814 [2024-12-09 15:37:12.833329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:23.083 15:37:17 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1803978 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1803978 ']' 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1803978 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803978 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803978' 00:04:23.083 killing process with pid 1803978 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1803978 00:04:23.083 15:37:17 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1803978 00:04:23.083 00:04:23.083 real 0m5.362s 00:04:23.083 user 0m5.124s 00:04:23.083 sys 0m0.279s 00:04:23.083 15:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.083 15:37:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 ************************************ 00:04:23.083 END TEST skip_rpc 00:04:23.083 ************************************ 00:04:23.083 15:37:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:23.083 15:37:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.083 15:37:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.083 15:37:18 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 ************************************ 00:04:23.083 START TEST skip_rpc_with_json 00:04:23.083 ************************************ 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1804918 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1804918 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1804918 ']' 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.083 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.083 [2024-12-09 15:37:18.150414] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:23.083 [2024-12-09 15:37:18.150455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804918 ] 00:04:23.083 [2024-12-09 15:37:18.223879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.083 [2024-12-09 15:37:18.264201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.343 [2024-12-09 15:37:18.486733] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.343 request: 00:04:23.343 { 00:04:23.343 "trtype": "tcp", 00:04:23.343 "method": "nvmf_get_transports", 00:04:23.343 "req_id": 1 00:04:23.343 } 00:04:23.343 Got JSON-RPC error response 00:04:23.343 response: 00:04:23.343 { 00:04:23.343 "code": -19, 00:04:23.343 "message": "No such device" 00:04:23.343 } 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.343 [2024-12-09 15:37:18.498835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.343 15:37:18 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.343 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.602 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.603 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.603 { 00:04:23.603 "subsystems": [ 00:04:23.603 { 00:04:23.603 "subsystem": "fsdev", 00:04:23.603 "config": [ 00:04:23.603 { 00:04:23.603 "method": "fsdev_set_opts", 00:04:23.603 "params": { 00:04:23.603 "fsdev_io_pool_size": 65535, 00:04:23.603 "fsdev_io_cache_size": 256 00:04:23.603 } 00:04:23.603 } 00:04:23.603 ] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "vfio_user_target", 00:04:23.603 "config": null 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "keyring", 00:04:23.603 "config": [] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "iobuf", 00:04:23.603 "config": [ 00:04:23.603 { 00:04:23.603 "method": "iobuf_set_options", 00:04:23.603 "params": { 00:04:23.603 "small_pool_count": 8192, 00:04:23.603 "large_pool_count": 1024, 00:04:23.603 "small_bufsize": 8192, 00:04:23.603 "large_bufsize": 135168, 00:04:23.603 "enable_numa": false 00:04:23.603 } 00:04:23.603 } 00:04:23.603 ] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "sock", 00:04:23.603 "config": [ 00:04:23.603 { 00:04:23.603 "method": "sock_set_default_impl", 00:04:23.603 "params": { 00:04:23.603 "impl_name": "posix" 00:04:23.603 } 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "method": "sock_impl_set_options", 00:04:23.603 "params": { 00:04:23.603 "impl_name": "ssl", 00:04:23.603 "recv_buf_size": 4096, 00:04:23.603 "send_buf_size": 4096, 
00:04:23.603 "enable_recv_pipe": true, 00:04:23.603 "enable_quickack": false, 00:04:23.603 "enable_placement_id": 0, 00:04:23.603 "enable_zerocopy_send_server": true, 00:04:23.603 "enable_zerocopy_send_client": false, 00:04:23.603 "zerocopy_threshold": 0, 00:04:23.603 "tls_version": 0, 00:04:23.603 "enable_ktls": false 00:04:23.603 } 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "method": "sock_impl_set_options", 00:04:23.603 "params": { 00:04:23.603 "impl_name": "posix", 00:04:23.603 "recv_buf_size": 2097152, 00:04:23.603 "send_buf_size": 2097152, 00:04:23.603 "enable_recv_pipe": true, 00:04:23.603 "enable_quickack": false, 00:04:23.603 "enable_placement_id": 0, 00:04:23.603 "enable_zerocopy_send_server": true, 00:04:23.603 "enable_zerocopy_send_client": false, 00:04:23.603 "zerocopy_threshold": 0, 00:04:23.603 "tls_version": 0, 00:04:23.603 "enable_ktls": false 00:04:23.603 } 00:04:23.603 } 00:04:23.603 ] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "vmd", 00:04:23.603 "config": [] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "accel", 00:04:23.603 "config": [ 00:04:23.603 { 00:04:23.603 "method": "accel_set_options", 00:04:23.603 "params": { 00:04:23.603 "small_cache_size": 128, 00:04:23.603 "large_cache_size": 16, 00:04:23.603 "task_count": 2048, 00:04:23.603 "sequence_count": 2048, 00:04:23.603 "buf_count": 2048 00:04:23.603 } 00:04:23.603 } 00:04:23.603 ] 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "subsystem": "bdev", 00:04:23.603 "config": [ 00:04:23.603 { 00:04:23.603 "method": "bdev_set_options", 00:04:23.603 "params": { 00:04:23.603 "bdev_io_pool_size": 65535, 00:04:23.603 "bdev_io_cache_size": 256, 00:04:23.603 "bdev_auto_examine": true, 00:04:23.603 "iobuf_small_cache_size": 128, 00:04:23.603 "iobuf_large_cache_size": 16 00:04:23.603 } 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "method": "bdev_raid_set_options", 00:04:23.603 "params": { 00:04:23.603 "process_window_size_kb": 1024, 00:04:23.603 "process_max_bandwidth_mb_sec": 0 
00:04:23.603 } 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "method": "bdev_iscsi_set_options", 00:04:23.603 "params": { 00:04:23.603 "timeout_sec": 30 00:04:23.603 } 00:04:23.603 }, 00:04:23.603 { 00:04:23.603 "method": "bdev_nvme_set_options", 00:04:23.603 "params": { 00:04:23.603 "action_on_timeout": "none", 00:04:23.603 "timeout_us": 0, 00:04:23.603 "timeout_admin_us": 0, 00:04:23.603 "keep_alive_timeout_ms": 10000, 00:04:23.603 "arbitration_burst": 0, 00:04:23.603 "low_priority_weight": 0, 00:04:23.603 "medium_priority_weight": 0, 00:04:23.603 "high_priority_weight": 0, 00:04:23.603 "nvme_adminq_poll_period_us": 10000, 00:04:23.603 "nvme_ioq_poll_period_us": 0, 00:04:23.604 "io_queue_requests": 0, 00:04:23.604 "delay_cmd_submit": true, 00:04:23.604 "transport_retry_count": 4, 00:04:23.604 "bdev_retry_count": 3, 00:04:23.604 "transport_ack_timeout": 0, 00:04:23.604 "ctrlr_loss_timeout_sec": 0, 00:04:23.604 "reconnect_delay_sec": 0, 00:04:23.604 "fast_io_fail_timeout_sec": 0, 00:04:23.604 "disable_auto_failback": false, 00:04:23.604 "generate_uuids": false, 00:04:23.604 "transport_tos": 0, 00:04:23.604 "nvme_error_stat": false, 00:04:23.604 "rdma_srq_size": 0, 00:04:23.604 "io_path_stat": false, 00:04:23.604 "allow_accel_sequence": false, 00:04:23.604 "rdma_max_cq_size": 0, 00:04:23.604 "rdma_cm_event_timeout_ms": 0, 00:04:23.604 "dhchap_digests": [ 00:04:23.604 "sha256", 00:04:23.604 "sha384", 00:04:23.604 "sha512" 00:04:23.604 ], 00:04:23.604 "dhchap_dhgroups": [ 00:04:23.604 "null", 00:04:23.604 "ffdhe2048", 00:04:23.604 "ffdhe3072", 00:04:23.604 "ffdhe4096", 00:04:23.604 "ffdhe6144", 00:04:23.604 "ffdhe8192" 00:04:23.604 ] 00:04:23.604 } 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "method": "bdev_nvme_set_hotplug", 00:04:23.604 "params": { 00:04:23.604 "period_us": 100000, 00:04:23.604 "enable": false 00:04:23.604 } 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "method": "bdev_wait_for_examine" 00:04:23.604 } 00:04:23.604 ] 00:04:23.604 }, 00:04:23.604 { 
00:04:23.604 "subsystem": "scsi", 00:04:23.604 "config": null 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "scheduler", 00:04:23.604 "config": [ 00:04:23.604 { 00:04:23.604 "method": "framework_set_scheduler", 00:04:23.604 "params": { 00:04:23.604 "name": "static" 00:04:23.604 } 00:04:23.604 } 00:04:23.604 ] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "vhost_scsi", 00:04:23.604 "config": [] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "vhost_blk", 00:04:23.604 "config": [] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "ublk", 00:04:23.604 "config": [] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "nbd", 00:04:23.604 "config": [] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "nvmf", 00:04:23.604 "config": [ 00:04:23.604 { 00:04:23.604 "method": "nvmf_set_config", 00:04:23.604 "params": { 00:04:23.604 "discovery_filter": "match_any", 00:04:23.604 "admin_cmd_passthru": { 00:04:23.604 "identify_ctrlr": false 00:04:23.604 }, 00:04:23.604 "dhchap_digests": [ 00:04:23.604 "sha256", 00:04:23.604 "sha384", 00:04:23.604 "sha512" 00:04:23.604 ], 00:04:23.604 "dhchap_dhgroups": [ 00:04:23.604 "null", 00:04:23.604 "ffdhe2048", 00:04:23.604 "ffdhe3072", 00:04:23.604 "ffdhe4096", 00:04:23.604 "ffdhe6144", 00:04:23.604 "ffdhe8192" 00:04:23.604 ] 00:04:23.604 } 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "method": "nvmf_set_max_subsystems", 00:04:23.604 "params": { 00:04:23.604 "max_subsystems": 1024 00:04:23.604 } 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "method": "nvmf_set_crdt", 00:04:23.604 "params": { 00:04:23.604 "crdt1": 0, 00:04:23.604 "crdt2": 0, 00:04:23.604 "crdt3": 0 00:04:23.604 } 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "method": "nvmf_create_transport", 00:04:23.604 "params": { 00:04:23.604 "trtype": "TCP", 00:04:23.604 "max_queue_depth": 128, 00:04:23.604 "max_io_qpairs_per_ctrlr": 127, 00:04:23.604 "in_capsule_data_size": 4096, 00:04:23.604 "max_io_size": 131072, 00:04:23.604 
"io_unit_size": 131072, 00:04:23.604 "max_aq_depth": 128, 00:04:23.604 "num_shared_buffers": 511, 00:04:23.604 "buf_cache_size": 4294967295, 00:04:23.604 "dif_insert_or_strip": false, 00:04:23.604 "zcopy": false, 00:04:23.604 "c2h_success": true, 00:04:23.604 "sock_priority": 0, 00:04:23.604 "abort_timeout_sec": 1, 00:04:23.604 "ack_timeout": 0, 00:04:23.604 "data_wr_pool_size": 0 00:04:23.604 } 00:04:23.604 } 00:04:23.604 ] 00:04:23.604 }, 00:04:23.604 { 00:04:23.604 "subsystem": "iscsi", 00:04:23.605 "config": [ 00:04:23.605 { 00:04:23.605 "method": "iscsi_set_options", 00:04:23.605 "params": { 00:04:23.605 "node_base": "iqn.2016-06.io.spdk", 00:04:23.605 "max_sessions": 128, 00:04:23.605 "max_connections_per_session": 2, 00:04:23.605 "max_queue_depth": 64, 00:04:23.605 "default_time2wait": 2, 00:04:23.605 "default_time2retain": 20, 00:04:23.605 "first_burst_length": 8192, 00:04:23.605 "immediate_data": true, 00:04:23.605 "allow_duplicated_isid": false, 00:04:23.605 "error_recovery_level": 0, 00:04:23.605 "nop_timeout": 60, 00:04:23.605 "nop_in_interval": 30, 00:04:23.605 "disable_chap": false, 00:04:23.605 "require_chap": false, 00:04:23.605 "mutual_chap": false, 00:04:23.605 "chap_group": 0, 00:04:23.605 "max_large_datain_per_connection": 64, 00:04:23.605 "max_r2t_per_connection": 4, 00:04:23.605 "pdu_pool_size": 36864, 00:04:23.605 "immediate_data_pool_size": 16384, 00:04:23.605 "data_out_pool_size": 2048 00:04:23.605 } 00:04:23.605 } 00:04:23.605 ] 00:04:23.605 } 00:04:23.605 ] 00:04:23.605 } 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1804918 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1804918 ']' 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1804918 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1804918 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1804918' 00:04:23.605 killing process with pid 1804918 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1804918 00:04:23.605 15:37:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1804918 00:04:23.864 15:37:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1804935 00:04:23.864 15:37:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.864 15:37:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:29.155 15:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1804935 ']' 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1804935' 00:04:29.156 killing process with pid 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1804935 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.156 00:04:29.156 real 0m6.280s 00:04:29.156 user 0m5.983s 00:04:29.156 sys 0m0.594s 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.156 15:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.156 ************************************ 00:04:29.156 END TEST skip_rpc_with_json 00:04:29.156 ************************************ 00:04:29.415 15:37:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.415 15:37:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.415 15:37:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.415 15:37:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.415 ************************************ 00:04:29.415 START TEST skip_rpc_with_delay 00:04:29.415 ************************************ 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.415 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.416 [2024-12-09 15:37:24.501930] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.416 00:04:29.416 real 0m0.070s 00:04:29.416 user 0m0.047s 00:04:29.416 sys 0m0.023s 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.416 15:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.416 ************************************ 00:04:29.416 END TEST skip_rpc_with_delay 00:04:29.416 ************************************ 00:04:29.416 15:37:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:29.416 15:37:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:29.416 15:37:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:29.416 15:37:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.416 15:37:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.416 15:37:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.416 ************************************ 00:04:29.416 START TEST exit_on_failed_rpc_init 00:04:29.416 ************************************ 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1805942 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1805942 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1805942 ']' 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.416 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.676 [2024-12-09 15:37:24.650435] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:29.676 [2024-12-09 15:37:24.650482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805942 ] 00:04:29.676 [2024-12-09 15:37:24.727016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.676 [2024-12-09 15:37:24.767641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.936 
15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.936 15:37:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.936 [2024-12-09 15:37:25.039262] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:29.936 [2024-12-09 15:37:25.039308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806117 ] 00:04:29.936 [2024-12-09 15:37:25.110676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.936 [2024-12-09 15:37:25.149733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.936 [2024-12-09 15:37:25.149787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:29.936 [2024-12-09 15:37:25.149796] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.936 [2024-12-09 15:37:25.149802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1805942 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1805942 ']' 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1805942 00:04:30.195 15:37:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1805942 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1805942' 00:04:30.195 killing process with pid 1805942 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1805942 00:04:30.195 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1805942 00:04:30.454 00:04:30.454 real 0m0.948s 00:04:30.454 user 0m1.010s 00:04:30.454 sys 0m0.378s 00:04:30.454 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.454 15:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.454 ************************************ 00:04:30.454 END TEST exit_on_failed_rpc_init 00:04:30.454 ************************************ 00:04:30.454 15:37:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.454 00:04:30.454 real 0m13.120s 00:04:30.454 user 0m12.378s 00:04:30.454 sys 0m1.549s 00:04:30.454 15:37:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.454 15:37:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.454 ************************************ 00:04:30.454 END TEST skip_rpc 00:04:30.454 ************************************ 00:04:30.454 15:37:25 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.454 15:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.454 15:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.454 15:37:25 -- common/autotest_common.sh@10 -- # set +x 00:04:30.454 ************************************ 00:04:30.454 START TEST rpc_client 00:04:30.454 ************************************ 00:04:30.454 15:37:25 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.715 * Looking for test storage... 00:04:30.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.715 15:37:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.715 --rc genhtml_branch_coverage=1 00:04:30.715 --rc genhtml_function_coverage=1 00:04:30.715 --rc genhtml_legend=1 00:04:30.715 --rc geninfo_all_blocks=1 00:04:30.715 --rc geninfo_unexecuted_blocks=1 00:04:30.715 00:04:30.715 ' 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.715 --rc genhtml_branch_coverage=1 
00:04:30.715 --rc genhtml_function_coverage=1 00:04:30.715 --rc genhtml_legend=1 00:04:30.715 --rc geninfo_all_blocks=1 00:04:30.715 --rc geninfo_unexecuted_blocks=1 00:04:30.715 00:04:30.715 ' 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.715 --rc genhtml_branch_coverage=1 00:04:30.715 --rc genhtml_function_coverage=1 00:04:30.715 --rc genhtml_legend=1 00:04:30.715 --rc geninfo_all_blocks=1 00:04:30.715 --rc geninfo_unexecuted_blocks=1 00:04:30.715 00:04:30.715 ' 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.715 --rc genhtml_branch_coverage=1 00:04:30.715 --rc genhtml_function_coverage=1 00:04:30.715 --rc genhtml_legend=1 00:04:30.715 --rc geninfo_all_blocks=1 00:04:30.715 --rc geninfo_unexecuted_blocks=1 00:04:30.715 00:04:30.715 ' 00:04:30.715 15:37:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:30.715 OK 00:04:30.715 15:37:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.715 00:04:30.715 real 0m0.196s 00:04:30.715 user 0m0.118s 00:04:30.715 sys 0m0.091s 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.715 15:37:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:30.715 ************************************ 00:04:30.715 END TEST rpc_client 00:04:30.715 ************************************ 00:04:30.715 15:37:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.715 15:37:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.715 15:37:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.715 15:37:25 -- common/autotest_common.sh@10 
-- # set +x 00:04:30.715 ************************************ 00:04:30.715 START TEST json_config 00:04:30.715 ************************************ 00:04:30.715 15:37:25 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.975 15:37:25 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.975 15:37:25 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.975 15:37:25 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.975 15:37:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.975 15:37:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.975 15:37:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.975 15:37:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.975 15:37:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.975 15:37:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.975 15:37:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.975 15:37:26 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.975 15:37:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.975 15:37:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.975 15:37:26 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.975 15:37:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.975 15:37:26 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.975 15:37:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.975 15:37:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.975 15:37:26 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.975 15:37:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.975 15:37:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.975 --rc genhtml_branch_coverage=1 00:04:30.975 --rc genhtml_function_coverage=1 00:04:30.975 --rc genhtml_legend=1 00:04:30.975 --rc geninfo_all_blocks=1 00:04:30.975 --rc geninfo_unexecuted_blocks=1 00:04:30.975 00:04:30.975 ' 00:04:30.975 15:37:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.975 --rc genhtml_branch_coverage=1 00:04:30.975 --rc genhtml_function_coverage=1 00:04:30.975 --rc genhtml_legend=1 00:04:30.975 --rc geninfo_all_blocks=1 00:04:30.975 --rc geninfo_unexecuted_blocks=1 00:04:30.975 00:04:30.975 ' 00:04:30.975 15:37:26 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.975 --rc genhtml_branch_coverage=1 00:04:30.975 --rc genhtml_function_coverage=1 00:04:30.975 --rc genhtml_legend=1 00:04:30.975 --rc geninfo_all_blocks=1 00:04:30.975 --rc geninfo_unexecuted_blocks=1 00:04:30.975 00:04:30.975 ' 00:04:30.975 15:37:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.975 --rc genhtml_branch_coverage=1 00:04:30.975 --rc genhtml_function_coverage=1 00:04:30.975 --rc genhtml_legend=1 00:04:30.975 --rc geninfo_all_blocks=1 00:04:30.975 --rc geninfo_unexecuted_blocks=1 00:04:30.975 00:04:30.975 ' 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.975 15:37:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.975 15:37:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.975 15:37:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.975 15:37:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.975 15:37:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.975 15:37:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.975 15:37:26 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.975 15:37:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.975 15:37:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.975 15:37:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.975 15:37:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:30.976 INFO: JSON configuration test init 00:04:30.976 15:37:26 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.976 15:37:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.976 15:37:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.976 15:37:26 json_config -- json_config/common.sh@10 -- # shift 00:04:30.976 15:37:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.976 15:37:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.976 15:37:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.976 15:37:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.976 15:37:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.976 15:37:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1806392 00:04:30.976 15:37:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.976 Waiting for target to run... 
00:04:30.976 15:37:26 json_config -- json_config/common.sh@25 -- # waitforlisten 1806392 /var/tmp/spdk_tgt.sock 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 1806392 ']' 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.976 15:37:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.976 15:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.976 [2024-12-09 15:37:26.164991] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:30.976 [2024-12-09 15:37:26.165044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806392 ] 00:04:31.234 [2024-12-09 15:37:26.456675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.493 [2024-12-09 15:37:26.490013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.062 15:37:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.062 15:37:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:32.062 15:37:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.062 00:04:32.062 15:37:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:32.063 15:37:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:32.063 15:37:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.063 15:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.063 15:37:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:32.063 15:37:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:32.063 15:37:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.063 15:37:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.063 15:37:27 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:32.063 15:37:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:32.063 15:37:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:35.352 15:37:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.352 15:37:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:35.352 15:37:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:35.352 15:37:30 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:35.352 15:37:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.352 15:37:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:35.352 15:37:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:35.353 15:37:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.353 15:37:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.353 15:37:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.353 MallocForNvmf0 00:04:35.353 15:37:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:35.353 15:37:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.611 MallocForNvmf1 00:04:35.611 15:37:30 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.611 15:37:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.917 [2024-12-09 15:37:30.923014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.917 15:37:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.917 15:37:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:36.250 15:37:31 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.250 15:37:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.250 15:37:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.250 15:37:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.510 15:37:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.510 15:37:31 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.510 [2024-12-09 15:37:31.725460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.768 15:37:31 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:36.768 15:37:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.769 15:37:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.769 15:37:31 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:36.769 15:37:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.769 15:37:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.769 15:37:31 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:36.769 15:37:31 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.769 15:37:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.769 MallocBdevForConfigChangeCheck 00:04:37.027 15:37:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:37.027 15:37:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.027 15:37:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.027 15:37:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:37.027 15:37:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.286 15:37:32 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:37.286 INFO: shutting down applications... 00:04:37.286 15:37:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:37.286 15:37:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:37.286 15:37:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:37.286 15:37:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:39.191 Calling clear_iscsi_subsystem 00:04:39.191 Calling clear_nvmf_subsystem 00:04:39.191 Calling clear_nbd_subsystem 00:04:39.191 Calling clear_ublk_subsystem 00:04:39.191 Calling clear_vhost_blk_subsystem 00:04:39.191 Calling clear_vhost_scsi_subsystem 00:04:39.191 Calling clear_bdev_subsystem 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:39.191 15:37:33 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:39.191 15:37:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:39.191 15:37:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:39.191 15:37:34 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:39.191 15:37:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:39.191 15:37:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.191 15:37:34 json_config -- json_config/common.sh@35 -- # [[ -n 1806392 ]] 00:04:39.191 15:37:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1806392 00:04:39.191 15:37:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.191 15:37:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.191 15:37:34 json_config -- json_config/common.sh@41 -- # kill -0 1806392 00:04:39.191 15:37:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.759 15:37:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.759 15:37:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.759 15:37:34 json_config -- json_config/common.sh@41 -- # kill -0 1806392 00:04:39.759 15:37:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.759 15:37:34 json_config -- json_config/common.sh@43 -- # break 00:04:39.759 15:37:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.759 15:37:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.759 SPDK target shutdown done 00:04:39.759 15:37:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:39.759 INFO: relaunching applications... 
00:04:39.759 15:37:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.759 15:37:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.759 15:37:34 json_config -- json_config/common.sh@10 -- # shift 00:04:39.759 15:37:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.759 15:37:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.759 15:37:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.759 15:37:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.759 15:37:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.759 15:37:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1807978 00:04:39.759 15:37:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.759 Waiting for target to run... 00:04:39.759 15:37:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.759 15:37:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1807978 /var/tmp/spdk_tgt.sock 00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 1807978 ']' 00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.759 15:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.759 [2024-12-09 15:37:34.922252] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:39.759 [2024-12-09 15:37:34.922310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807978 ] 00:04:40.327 [2024-12-09 15:37:35.383805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.327 [2024-12-09 15:37:35.439799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.616 [2024-12-09 15:37:38.471021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.616 [2024-12-09 15:37:38.503303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.184 15:37:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.184 15:37:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:44.184 15:37:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.184 00:04:44.184 15:37:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:44.184 15:37:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:44.184 INFO: Checking if target configuration is the same... 
00:04:44.184 15:37:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:44.184 15:37:39 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.184 15:37:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.184 + '[' 2 -ne 2 ']' 00:04:44.184 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.184 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:44.184 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.184 +++ basename /dev/fd/62 00:04:44.184 ++ mktemp /tmp/62.XXX 00:04:44.184 + tmp_file_1=/tmp/62.pyh 00:04:44.184 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.184 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.184 + tmp_file_2=/tmp/spdk_tgt_config.json.KET 00:04:44.184 + ret=0 00:04:44.184 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.444 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.444 + diff -u /tmp/62.pyh /tmp/spdk_tgt_config.json.KET 00:04:44.444 + echo 'INFO: JSON config files are the same' 00:04:44.444 INFO: JSON config files are the same 00:04:44.444 + rm /tmp/62.pyh /tmp/spdk_tgt_config.json.KET 00:04:44.444 + exit 0 00:04:44.444 15:37:39 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:44.444 15:37:39 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.444 INFO: changing configuration and checking if this can be detected... 
00:04:44.444 15:37:39 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.444 15:37:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.703 15:37:39 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.703 15:37:39 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:44.703 15:37:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.703 + '[' 2 -ne 2 ']' 00:04:44.703 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.703 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:44.703 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.703 +++ basename /dev/fd/62 00:04:44.703 ++ mktemp /tmp/62.XXX 00:04:44.703 + tmp_file_1=/tmp/62.cro 00:04:44.703 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.703 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.703 + tmp_file_2=/tmp/spdk_tgt_config.json.rMX 00:04:44.703 + ret=0 00:04:44.703 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.962 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.962 + diff -u /tmp/62.cro /tmp/spdk_tgt_config.json.rMX 00:04:44.962 + ret=1 00:04:44.962 + echo '=== Start of file: /tmp/62.cro ===' 00:04:44.962 + cat /tmp/62.cro 00:04:44.962 + echo '=== End of file: /tmp/62.cro ===' 00:04:44.962 + echo '' 00:04:44.962 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rMX ===' 00:04:44.962 + cat /tmp/spdk_tgt_config.json.rMX 00:04:44.962 + echo '=== End of file: /tmp/spdk_tgt_config.json.rMX ===' 00:04:44.962 + echo '' 00:04:44.962 + rm /tmp/62.cro /tmp/spdk_tgt_config.json.rMX 00:04:44.962 + exit 1 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:44.962 INFO: configuration change detected. 
00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@324 -- # [[ -n 1807978 ]] 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:44.962 15:37:40 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.962 15:37:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.221 15:37:40 json_config -- json_config/json_config.sh@330 -- # killprocess 1807978 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@954 -- # '[' -z 1807978 ']' 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@958 -- # kill -0 
1807978 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@959 -- # uname 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807978 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807978' 00:04:45.221 killing process with pid 1807978 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@973 -- # kill 1807978 00:04:45.221 15:37:40 json_config -- common/autotest_common.sh@978 -- # wait 1807978 00:04:46.599 15:37:41 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.599 15:37:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:46.599 15:37:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.599 15:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.599 15:37:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:46.599 15:37:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:46.599 INFO: Success 00:04:46.599 00:04:46.599 real 0m15.865s 00:04:46.599 user 0m16.556s 00:04:46.599 sys 0m2.521s 00:04:46.599 15:37:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.599 15:37:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.599 ************************************ 00:04:46.599 END TEST json_config 00:04:46.599 ************************************ 00:04:46.599 15:37:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.599 15:37:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.599 15:37:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.599 15:37:41 -- common/autotest_common.sh@10 -- # set +x 00:04:46.859 ************************************ 00:04:46.859 START TEST json_config_extra_key 00:04:46.859 ************************************ 00:04:46.859 15:37:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.859 15:37:41 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.859 15:37:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.859 15:37:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.859 15:37:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.859 15:37:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.860 15:37:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:46.860 15:37:41 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.860 15:37:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.860 --rc genhtml_branch_coverage=1 00:04:46.860 --rc genhtml_function_coverage=1 00:04:46.860 --rc genhtml_legend=1 00:04:46.860 --rc geninfo_all_blocks=1 
00:04:46.860 --rc geninfo_unexecuted_blocks=1 00:04:46.860 00:04:46.860 ' 00:04:46.860 15:37:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.860 --rc genhtml_branch_coverage=1 00:04:46.860 --rc genhtml_function_coverage=1 00:04:46.860 --rc genhtml_legend=1 00:04:46.860 --rc geninfo_all_blocks=1 00:04:46.860 --rc geninfo_unexecuted_blocks=1 00:04:46.860 00:04:46.860 ' 00:04:46.860 15:37:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.860 --rc genhtml_branch_coverage=1 00:04:46.860 --rc genhtml_function_coverage=1 00:04:46.860 --rc genhtml_legend=1 00:04:46.860 --rc geninfo_all_blocks=1 00:04:46.860 --rc geninfo_unexecuted_blocks=1 00:04:46.860 00:04:46.860 ' 00:04:46.860 15:37:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.860 --rc genhtml_branch_coverage=1 00:04:46.860 --rc genhtml_function_coverage=1 00:04:46.860 --rc genhtml_legend=1 00:04:46.860 --rc geninfo_all_blocks=1 00:04:46.860 --rc geninfo_unexecuted_blocks=1 00:04:46.860 00:04:46.860 ' 00:04:46.860 15:37:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.860 15:37:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.860 15:37:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.860 15:37:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.860 15:37:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.860 15:37:42 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.860 15:37:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.860 15:37:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.860 15:37:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.860 15:37:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:46.860 15:37:42 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.860 15:37:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.860 INFO: launching applications... 00:04:46.860 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1809242 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.860 Waiting for target to run... 
00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1809242 /var/tmp/spdk_tgt.sock 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1809242 ']' 00:04:46.860 15:37:42 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.860 15:37:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.120 [2024-12-09 15:37:42.090223] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:47.120 [2024-12-09 15:37:42.090272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809242 ] 00:04:47.379 [2024-12-09 15:37:42.543821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.379 [2024-12-09 15:37:42.592936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.946 15:37:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.946 15:37:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.946 00:04:47.946 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.946 INFO: shutting down applications... 00:04:47.946 15:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1809242 ]] 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1809242 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1809242 00:04:47.946 15:37:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.206 15:37:43 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1809242 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.206 15:37:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.206 SPDK target shutdown done 00:04:48.206 15:37:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.206 Success 00:04:48.206 00:04:48.206 real 0m1.574s 00:04:48.206 user 0m1.167s 00:04:48.206 sys 0m0.585s 00:04:48.206 15:37:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.206 15:37:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.206 ************************************ 00:04:48.206 END TEST json_config_extra_key 00:04:48.206 ************************************ 00:04:48.465 15:37:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.465 15:37:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.465 15:37:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.465 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:04:48.465 ************************************ 00:04:48.465 START TEST alias_rpc 00:04:48.465 ************************************ 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.465 * Looking for test storage... 
00:04:48.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.465 15:37:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.465 --rc genhtml_branch_coverage=1 00:04:48.465 --rc genhtml_function_coverage=1 00:04:48.465 --rc genhtml_legend=1 00:04:48.465 --rc geninfo_all_blocks=1 00:04:48.465 --rc geninfo_unexecuted_blocks=1 00:04:48.465 00:04:48.465 ' 00:04:48.465 15:37:43 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.466 --rc genhtml_branch_coverage=1 00:04:48.466 --rc genhtml_function_coverage=1 00:04:48.466 --rc genhtml_legend=1 00:04:48.466 --rc geninfo_all_blocks=1 00:04:48.466 --rc geninfo_unexecuted_blocks=1 00:04:48.466 00:04:48.466 ' 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:04:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.466 --rc genhtml_branch_coverage=1 00:04:48.466 --rc genhtml_function_coverage=1 00:04:48.466 --rc genhtml_legend=1 00:04:48.466 --rc geninfo_all_blocks=1 00:04:48.466 --rc geninfo_unexecuted_blocks=1 00:04:48.466 00:04:48.466 ' 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.466 --rc genhtml_branch_coverage=1 00:04:48.466 --rc genhtml_function_coverage=1 00:04:48.466 --rc genhtml_legend=1 00:04:48.466 --rc geninfo_all_blocks=1 00:04:48.466 --rc geninfo_unexecuted_blocks=1 00:04:48.466 00:04:48.466 ' 00:04:48.466 15:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.466 15:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1809576 00:04:48.466 15:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.466 15:37:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1809576 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1809576 ']' 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.466 15:37:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.725 [2024-12-09 15:37:43.719916] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:48.725 [2024-12-09 15:37:43.719965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809576 ] 00:04:48.725 [2024-12-09 15:37:43.796004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.725 [2024-12-09 15:37:43.836659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.984 15:37:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.984 15:37:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.984 15:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:49.242 15:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1809576 00:04:49.242 15:37:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1809576 ']' 00:04:49.242 15:37:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1809576 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809576 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809576' 00:04:49.243 killing process with pid 1809576 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@973 -- # kill 1809576 00:04:49.243 15:37:44 alias_rpc -- common/autotest_common.sh@978 -- # wait 1809576 00:04:49.502 00:04:49.502 real 0m1.130s 00:04:49.502 user 0m1.140s 00:04:49.502 sys 0m0.406s 00:04:49.502 15:37:44 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.502 15:37:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.502 ************************************ 00:04:49.502 END TEST alias_rpc 00:04:49.502 ************************************ 00:04:49.502 15:37:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:49.502 15:37:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.502 15:37:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.502 15:37:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.502 15:37:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.502 ************************************ 00:04:49.502 START TEST spdkcli_tcp 00:04:49.502 ************************************ 00:04:49.502 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.761 * Looking for test storage... 
00:04:49.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.761 15:37:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.761 --rc genhtml_branch_coverage=1 00:04:49.761 --rc genhtml_function_coverage=1 00:04:49.761 --rc genhtml_legend=1 00:04:49.761 --rc geninfo_all_blocks=1 00:04:49.761 --rc geninfo_unexecuted_blocks=1 00:04:49.761 00:04:49.761 ' 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.761 --rc genhtml_branch_coverage=1 00:04:49.761 --rc genhtml_function_coverage=1 00:04:49.761 --rc genhtml_legend=1 00:04:49.761 --rc geninfo_all_blocks=1 00:04:49.761 --rc geninfo_unexecuted_blocks=1 00:04:49.761 00:04:49.761 ' 00:04:49.761 15:37:44 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.761 --rc genhtml_branch_coverage=1 00:04:49.761 --rc genhtml_function_coverage=1 00:04:49.761 --rc genhtml_legend=1 00:04:49.761 --rc geninfo_all_blocks=1 00:04:49.761 --rc geninfo_unexecuted_blocks=1 00:04:49.761 00:04:49.761 ' 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.761 --rc genhtml_branch_coverage=1 00:04:49.761 --rc genhtml_function_coverage=1 00:04:49.761 --rc genhtml_legend=1 00:04:49.761 --rc geninfo_all_blocks=1 00:04:49.761 --rc geninfo_unexecuted_blocks=1 00:04:49.761 00:04:49.761 ' 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1809819 00:04:49.761 15:37:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.761 15:37:44 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1809819 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1809819 ']' 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.761 15:37:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.761 [2024-12-09 15:37:44.926459] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:49.761 [2024-12-09 15:37:44.926507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809819 ] 00:04:50.020 [2024-12-09 15:37:44.994943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.020 [2024-12-09 15:37:45.034302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.020 [2024-12-09 15:37:45.034302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.279 15:37:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.279 15:37:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:50.279 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1810006 00:04:50.279 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.279 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.279 [ 00:04:50.279 "bdev_malloc_delete", 00:04:50.279 "bdev_malloc_create", 00:04:50.279 "bdev_null_resize", 00:04:50.279 "bdev_null_delete", 00:04:50.279 "bdev_null_create", 00:04:50.279 "bdev_nvme_cuse_unregister", 00:04:50.279 "bdev_nvme_cuse_register", 00:04:50.279 "bdev_opal_new_user", 00:04:50.279 "bdev_opal_set_lock_state", 00:04:50.279 "bdev_opal_delete", 00:04:50.279 "bdev_opal_get_info", 00:04:50.279 "bdev_opal_create", 00:04:50.279 "bdev_nvme_opal_revert", 00:04:50.279 "bdev_nvme_opal_init", 00:04:50.279 "bdev_nvme_send_cmd", 00:04:50.279 "bdev_nvme_set_keys", 00:04:50.279 "bdev_nvme_get_path_iostat", 00:04:50.279 "bdev_nvme_get_mdns_discovery_info", 00:04:50.279 "bdev_nvme_stop_mdns_discovery", 00:04:50.279 "bdev_nvme_start_mdns_discovery", 00:04:50.279 "bdev_nvme_set_multipath_policy", 00:04:50.279 "bdev_nvme_set_preferred_path", 00:04:50.279 "bdev_nvme_get_io_paths", 00:04:50.279 "bdev_nvme_remove_error_injection", 00:04:50.279 "bdev_nvme_add_error_injection", 00:04:50.279 "bdev_nvme_get_discovery_info", 00:04:50.279 "bdev_nvme_stop_discovery", 00:04:50.279 "bdev_nvme_start_discovery", 00:04:50.279 "bdev_nvme_get_controller_health_info", 00:04:50.279 "bdev_nvme_disable_controller", 00:04:50.279 "bdev_nvme_enable_controller", 00:04:50.279 "bdev_nvme_reset_controller", 00:04:50.279 "bdev_nvme_get_transport_statistics", 00:04:50.279 "bdev_nvme_apply_firmware", 00:04:50.279 "bdev_nvme_detach_controller", 00:04:50.279 "bdev_nvme_get_controllers", 00:04:50.279 "bdev_nvme_attach_controller", 00:04:50.279 "bdev_nvme_set_hotplug", 00:04:50.279 "bdev_nvme_set_options", 00:04:50.279 "bdev_passthru_delete", 00:04:50.279 "bdev_passthru_create", 00:04:50.279 "bdev_lvol_set_parent_bdev", 00:04:50.279 "bdev_lvol_set_parent", 00:04:50.279 "bdev_lvol_check_shallow_copy", 00:04:50.279 "bdev_lvol_start_shallow_copy", 00:04:50.279 "bdev_lvol_grow_lvstore", 00:04:50.279 "bdev_lvol_get_lvols", 00:04:50.279 
"bdev_lvol_get_lvstores", 00:04:50.279 "bdev_lvol_delete", 00:04:50.279 "bdev_lvol_set_read_only", 00:04:50.279 "bdev_lvol_resize", 00:04:50.279 "bdev_lvol_decouple_parent", 00:04:50.279 "bdev_lvol_inflate", 00:04:50.279 "bdev_lvol_rename", 00:04:50.279 "bdev_lvol_clone_bdev", 00:04:50.279 "bdev_lvol_clone", 00:04:50.279 "bdev_lvol_snapshot", 00:04:50.279 "bdev_lvol_create", 00:04:50.279 "bdev_lvol_delete_lvstore", 00:04:50.279 "bdev_lvol_rename_lvstore", 00:04:50.279 "bdev_lvol_create_lvstore", 00:04:50.279 "bdev_raid_set_options", 00:04:50.279 "bdev_raid_remove_base_bdev", 00:04:50.279 "bdev_raid_add_base_bdev", 00:04:50.279 "bdev_raid_delete", 00:04:50.279 "bdev_raid_create", 00:04:50.279 "bdev_raid_get_bdevs", 00:04:50.279 "bdev_error_inject_error", 00:04:50.279 "bdev_error_delete", 00:04:50.279 "bdev_error_create", 00:04:50.279 "bdev_split_delete", 00:04:50.279 "bdev_split_create", 00:04:50.279 "bdev_delay_delete", 00:04:50.279 "bdev_delay_create", 00:04:50.279 "bdev_delay_update_latency", 00:04:50.279 "bdev_zone_block_delete", 00:04:50.279 "bdev_zone_block_create", 00:04:50.280 "blobfs_create", 00:04:50.280 "blobfs_detect", 00:04:50.280 "blobfs_set_cache_size", 00:04:50.280 "bdev_aio_delete", 00:04:50.280 "bdev_aio_rescan", 00:04:50.280 "bdev_aio_create", 00:04:50.280 "bdev_ftl_set_property", 00:04:50.280 "bdev_ftl_get_properties", 00:04:50.280 "bdev_ftl_get_stats", 00:04:50.280 "bdev_ftl_unmap", 00:04:50.280 "bdev_ftl_unload", 00:04:50.280 "bdev_ftl_delete", 00:04:50.280 "bdev_ftl_load", 00:04:50.280 "bdev_ftl_create", 00:04:50.280 "bdev_virtio_attach_controller", 00:04:50.280 "bdev_virtio_scsi_get_devices", 00:04:50.280 "bdev_virtio_detach_controller", 00:04:50.280 "bdev_virtio_blk_set_hotplug", 00:04:50.280 "bdev_iscsi_delete", 00:04:50.280 "bdev_iscsi_create", 00:04:50.280 "bdev_iscsi_set_options", 00:04:50.280 "accel_error_inject_error", 00:04:50.280 "ioat_scan_accel_module", 00:04:50.280 "dsa_scan_accel_module", 00:04:50.280 "iaa_scan_accel_module", 
00:04:50.280 "vfu_virtio_create_fs_endpoint", 00:04:50.280 "vfu_virtio_create_scsi_endpoint", 00:04:50.280 "vfu_virtio_scsi_remove_target", 00:04:50.280 "vfu_virtio_scsi_add_target", 00:04:50.280 "vfu_virtio_create_blk_endpoint", 00:04:50.280 "vfu_virtio_delete_endpoint", 00:04:50.280 "keyring_file_remove_key", 00:04:50.280 "keyring_file_add_key", 00:04:50.280 "keyring_linux_set_options", 00:04:50.280 "fsdev_aio_delete", 00:04:50.280 "fsdev_aio_create", 00:04:50.280 "iscsi_get_histogram", 00:04:50.280 "iscsi_enable_histogram", 00:04:50.280 "iscsi_set_options", 00:04:50.280 "iscsi_get_auth_groups", 00:04:50.280 "iscsi_auth_group_remove_secret", 00:04:50.280 "iscsi_auth_group_add_secret", 00:04:50.280 "iscsi_delete_auth_group", 00:04:50.280 "iscsi_create_auth_group", 00:04:50.280 "iscsi_set_discovery_auth", 00:04:50.280 "iscsi_get_options", 00:04:50.280 "iscsi_target_node_request_logout", 00:04:50.280 "iscsi_target_node_set_redirect", 00:04:50.280 "iscsi_target_node_set_auth", 00:04:50.280 "iscsi_target_node_add_lun", 00:04:50.280 "iscsi_get_stats", 00:04:50.280 "iscsi_get_connections", 00:04:50.280 "iscsi_portal_group_set_auth", 00:04:50.280 "iscsi_start_portal_group", 00:04:50.280 "iscsi_delete_portal_group", 00:04:50.280 "iscsi_create_portal_group", 00:04:50.280 "iscsi_get_portal_groups", 00:04:50.280 "iscsi_delete_target_node", 00:04:50.280 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.280 "iscsi_target_node_add_pg_ig_maps", 00:04:50.280 "iscsi_create_target_node", 00:04:50.280 "iscsi_get_target_nodes", 00:04:50.280 "iscsi_delete_initiator_group", 00:04:50.280 "iscsi_initiator_group_remove_initiators", 00:04:50.280 "iscsi_initiator_group_add_initiators", 00:04:50.280 "iscsi_create_initiator_group", 00:04:50.280 "iscsi_get_initiator_groups", 00:04:50.280 "nvmf_set_crdt", 00:04:50.280 "nvmf_set_config", 00:04:50.280 "nvmf_set_max_subsystems", 00:04:50.280 "nvmf_stop_mdns_prr", 00:04:50.280 "nvmf_publish_mdns_prr", 00:04:50.280 "nvmf_subsystem_get_listeners", 
00:04:50.280 "nvmf_subsystem_get_qpairs", 00:04:50.280 "nvmf_subsystem_get_controllers", 00:04:50.280 "nvmf_get_stats", 00:04:50.280 "nvmf_get_transports", 00:04:50.280 "nvmf_create_transport", 00:04:50.280 "nvmf_get_targets", 00:04:50.280 "nvmf_delete_target", 00:04:50.280 "nvmf_create_target", 00:04:50.280 "nvmf_subsystem_allow_any_host", 00:04:50.280 "nvmf_subsystem_set_keys", 00:04:50.280 "nvmf_subsystem_remove_host", 00:04:50.280 "nvmf_subsystem_add_host", 00:04:50.280 "nvmf_ns_remove_host", 00:04:50.280 "nvmf_ns_add_host", 00:04:50.280 "nvmf_subsystem_remove_ns", 00:04:50.280 "nvmf_subsystem_set_ns_ana_group", 00:04:50.280 "nvmf_subsystem_add_ns", 00:04:50.280 "nvmf_subsystem_listener_set_ana_state", 00:04:50.280 "nvmf_discovery_get_referrals", 00:04:50.280 "nvmf_discovery_remove_referral", 00:04:50.280 "nvmf_discovery_add_referral", 00:04:50.280 "nvmf_subsystem_remove_listener", 00:04:50.280 "nvmf_subsystem_add_listener", 00:04:50.280 "nvmf_delete_subsystem", 00:04:50.280 "nvmf_create_subsystem", 00:04:50.280 "nvmf_get_subsystems", 00:04:50.280 "env_dpdk_get_mem_stats", 00:04:50.280 "nbd_get_disks", 00:04:50.280 "nbd_stop_disk", 00:04:50.280 "nbd_start_disk", 00:04:50.280 "ublk_recover_disk", 00:04:50.280 "ublk_get_disks", 00:04:50.280 "ublk_stop_disk", 00:04:50.280 "ublk_start_disk", 00:04:50.280 "ublk_destroy_target", 00:04:50.280 "ublk_create_target", 00:04:50.280 "virtio_blk_create_transport", 00:04:50.280 "virtio_blk_get_transports", 00:04:50.280 "vhost_controller_set_coalescing", 00:04:50.280 "vhost_get_controllers", 00:04:50.280 "vhost_delete_controller", 00:04:50.280 "vhost_create_blk_controller", 00:04:50.280 "vhost_scsi_controller_remove_target", 00:04:50.280 "vhost_scsi_controller_add_target", 00:04:50.280 "vhost_start_scsi_controller", 00:04:50.280 "vhost_create_scsi_controller", 00:04:50.280 "thread_set_cpumask", 00:04:50.280 "scheduler_set_options", 00:04:50.280 "framework_get_governor", 00:04:50.280 "framework_get_scheduler", 00:04:50.280 
"framework_set_scheduler", 00:04:50.280 "framework_get_reactors", 00:04:50.280 "thread_get_io_channels", 00:04:50.280 "thread_get_pollers", 00:04:50.280 "thread_get_stats", 00:04:50.280 "framework_monitor_context_switch", 00:04:50.280 "spdk_kill_instance", 00:04:50.280 "log_enable_timestamps", 00:04:50.280 "log_get_flags", 00:04:50.280 "log_clear_flag", 00:04:50.280 "log_set_flag", 00:04:50.280 "log_get_level", 00:04:50.280 "log_set_level", 00:04:50.280 "log_get_print_level", 00:04:50.280 "log_set_print_level", 00:04:50.280 "framework_enable_cpumask_locks", 00:04:50.280 "framework_disable_cpumask_locks", 00:04:50.280 "framework_wait_init", 00:04:50.280 "framework_start_init", 00:04:50.280 "scsi_get_devices", 00:04:50.280 "bdev_get_histogram", 00:04:50.280 "bdev_enable_histogram", 00:04:50.280 "bdev_set_qos_limit", 00:04:50.280 "bdev_set_qd_sampling_period", 00:04:50.280 "bdev_get_bdevs", 00:04:50.280 "bdev_reset_iostat", 00:04:50.280 "bdev_get_iostat", 00:04:50.280 "bdev_examine", 00:04:50.280 "bdev_wait_for_examine", 00:04:50.280 "bdev_set_options", 00:04:50.280 "accel_get_stats", 00:04:50.280 "accel_set_options", 00:04:50.280 "accel_set_driver", 00:04:50.280 "accel_crypto_key_destroy", 00:04:50.280 "accel_crypto_keys_get", 00:04:50.280 "accel_crypto_key_create", 00:04:50.280 "accel_assign_opc", 00:04:50.280 "accel_get_module_info", 00:04:50.280 "accel_get_opc_assignments", 00:04:50.280 "vmd_rescan", 00:04:50.280 "vmd_remove_device", 00:04:50.280 "vmd_enable", 00:04:50.280 "sock_get_default_impl", 00:04:50.280 "sock_set_default_impl", 00:04:50.280 "sock_impl_set_options", 00:04:50.280 "sock_impl_get_options", 00:04:50.280 "iobuf_get_stats", 00:04:50.280 "iobuf_set_options", 00:04:50.280 "keyring_get_keys", 00:04:50.280 "vfu_tgt_set_base_path", 00:04:50.280 "framework_get_pci_devices", 00:04:50.280 "framework_get_config", 00:04:50.280 "framework_get_subsystems", 00:04:50.280 "fsdev_set_opts", 00:04:50.280 "fsdev_get_opts", 00:04:50.280 "trace_get_info", 
00:04:50.280 "trace_get_tpoint_group_mask", 00:04:50.280 "trace_disable_tpoint_group", 00:04:50.280 "trace_enable_tpoint_group", 00:04:50.280 "trace_clear_tpoint_mask", 00:04:50.280 "trace_set_tpoint_mask", 00:04:50.280 "notify_get_notifications", 00:04:50.280 "notify_get_types", 00:04:50.280 "spdk_get_version", 00:04:50.280 "rpc_get_methods" 00:04:50.280 ] 00:04:50.280 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.280 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.280 15:37:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1809819 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1809819 ']' 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1809819 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.280 15:37:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809819 00:04:50.539 15:37:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.539 15:37:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.539 15:37:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809819' 00:04:50.539 killing process with pid 1809819 00:04:50.539 15:37:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1809819 00:04:50.539 15:37:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1809819 00:04:50.798 00:04:50.798 real 0m1.130s 00:04:50.798 user 0m1.903s 00:04:50.798 sys 0m0.450s 00:04:50.798 15:37:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.798 15:37:45 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.798 ************************************ 00:04:50.798 END TEST spdkcli_tcp 00:04:50.798 ************************************ 00:04:50.798 15:37:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.798 15:37:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.798 15:37:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.798 15:37:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.798 ************************************ 00:04:50.798 START TEST dpdk_mem_utility 00:04:50.798 ************************************ 00:04:50.798 15:37:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.798 * Looking for test storage... 00:04:50.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:50.798 15:37:45 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.798 15:37:45 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.798 15:37:45 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.058 15:37:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:04:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.058 --rc genhtml_branch_coverage=1 00:04:51.058 --rc genhtml_function_coverage=1 00:04:51.058 --rc genhtml_legend=1 00:04:51.058 --rc geninfo_all_blocks=1 00:04:51.058 --rc geninfo_unexecuted_blocks=1 00:04:51.058 00:04:51.058 ' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.058 --rc genhtml_branch_coverage=1 00:04:51.058 --rc genhtml_function_coverage=1 00:04:51.058 --rc genhtml_legend=1 00:04:51.058 --rc geninfo_all_blocks=1 00:04:51.058 --rc geninfo_unexecuted_blocks=1 00:04:51.058 00:04:51.058 ' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.058 --rc genhtml_branch_coverage=1 00:04:51.058 --rc genhtml_function_coverage=1 00:04:51.058 --rc genhtml_legend=1 00:04:51.058 --rc geninfo_all_blocks=1 00:04:51.058 --rc geninfo_unexecuted_blocks=1 00:04:51.058 00:04:51.058 ' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.058 --rc genhtml_branch_coverage=1 00:04:51.058 --rc genhtml_function_coverage=1 00:04:51.058 --rc genhtml_legend=1 00:04:51.058 --rc geninfo_all_blocks=1 00:04:51.058 --rc geninfo_unexecuted_blocks=1 00:04:51.058 00:04:51.058 ' 00:04:51.058 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.058 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1810118 00:04:51.058 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1810118 00:04:51.058 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1810118 ']' 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.058 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.058 [2024-12-09 15:37:46.119808] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:51.058 [2024-12-09 15:37:46.119858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810118 ] 00:04:51.058 [2024-12-09 15:37:46.193301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.058 [2024-12-09 15:37:46.231605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.320 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.320 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:51.320 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.320 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.320 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.320 
15:37:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.320 { 00:04:51.320 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.320 } 00:04:51.320 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.320 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.320 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:51.320 1 heaps totaling size 818.000000 MiB 00:04:51.320 size: 818.000000 MiB heap id: 0 00:04:51.320 end heaps---------- 00:04:51.320 9 mempools totaling size 603.782043 MiB 00:04:51.320 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.320 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.320 size: 100.555481 MiB name: bdev_io_1810118 00:04:51.320 size: 50.003479 MiB name: msgpool_1810118 00:04:51.320 size: 36.509338 MiB name: fsdev_io_1810118 00:04:51.320 size: 21.763794 MiB name: PDU_Pool 00:04:51.320 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.320 size: 4.133484 MiB name: evtpool_1810118 00:04:51.320 size: 0.026123 MiB name: Session_Pool 00:04:51.320 end mempools------- 00:04:51.320 6 memzones totaling size 4.142822 MiB 00:04:51.320 size: 1.000366 MiB name: RG_ring_0_1810118 00:04:51.320 size: 1.000366 MiB name: RG_ring_1_1810118 00:04:51.320 size: 1.000366 MiB name: RG_ring_4_1810118 00:04:51.320 size: 1.000366 MiB name: RG_ring_5_1810118 00:04:51.320 size: 0.125366 MiB name: RG_ring_2_1810118 00:04:51.320 size: 0.015991 MiB name: RG_ring_3_1810118 00:04:51.320 end memzones------- 00:04:51.320 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.320 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:51.320 list of free elements. 
size: 10.852478 MiB 00:04:51.320 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:51.320 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:51.320 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:51.320 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:51.320 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:51.320 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:51.320 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:51.320 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:51.320 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:51.320 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:51.320 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:51.320 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:51.320 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:51.320 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:51.320 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:51.320 list of standard malloc elements. 
size: 199.218628 MiB 00:04:51.320 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:51.320 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:51.320 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:51.320 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:51.320 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:51.320 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:51.320 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:51.320 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:51.320 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:51.320 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:51.320 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:51.320 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:51.320 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:51.320 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:51.320 list of memzone associated elements. 
size: 607.928894 MiB 00:04:51.320 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:51.321 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.321 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:51.321 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.321 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:51.321 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1810118_0 00:04:51.321 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:51.321 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1810118_0 00:04:51.321 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:51.321 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1810118_0 00:04:51.321 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:51.321 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.321 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:51.321 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.321 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:51.321 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1810118_0 00:04:51.321 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:51.321 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1810118 00:04:51.321 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:51.321 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1810118 00:04:51.321 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:51.321 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.321 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:51.321 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.321 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:51.321 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.321 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:51.321 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.321 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:51.321 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1810118 00:04:51.321 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:51.321 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1810118 00:04:51.321 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:51.321 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1810118 00:04:51.321 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:51.321 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1810118 00:04:51.321 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:51.321 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1810118 00:04:51.321 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:51.321 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1810118 00:04:51.321 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:51.321 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.321 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:51.321 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.321 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:51.321 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.321 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:51.321 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1810118 00:04:51.321 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:51.321 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1810118 00:04:51.321 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:51.321 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.321 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:51.321 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.321 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:51.321 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1810118 00:04:51.321 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:51.321 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.321 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:51.321 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1810118 00:04:51.321 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:51.321 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1810118 00:04:51.321 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:51.321 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1810118 00:04:51.321 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:51.321 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.582 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.582 15:37:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1810118 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1810118 ']' 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1810118 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810118 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.582 15:37:46 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810118' 00:04:51.582 killing process with pid 1810118 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1810118 00:04:51.582 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1810118 00:04:51.840 00:04:51.840 real 0m1.004s 00:04:51.840 user 0m0.965s 00:04:51.840 sys 0m0.379s 00:04:51.840 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.840 15:37:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 END TEST dpdk_mem_utility 00:04:51.840 ************************************ 00:04:51.840 15:37:46 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.840 15:37:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.840 15:37:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.840 15:37:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 START TEST event 00:04:51.840 ************************************ 00:04:51.840 15:37:46 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.840 * Looking for test storage... 
00:04:51.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.840 15:37:47 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.840 15:37:47 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.840 15:37:47 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.099 15:37:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.099 15:37:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.099 15:37:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.099 15:37:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.099 15:37:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.099 15:37:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.099 15:37:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.099 15:37:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.099 15:37:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.099 15:37:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.099 15:37:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.099 15:37:47 event -- scripts/common.sh@344 -- # case "$op" in 00:04:52.099 15:37:47 event -- scripts/common.sh@345 -- # : 1 00:04:52.099 15:37:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.099 15:37:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.099 15:37:47 event -- scripts/common.sh@365 -- # decimal 1 00:04:52.099 15:37:47 event -- scripts/common.sh@353 -- # local d=1 00:04:52.099 15:37:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.099 15:37:47 event -- scripts/common.sh@355 -- # echo 1 00:04:52.099 15:37:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.099 15:37:47 event -- scripts/common.sh@366 -- # decimal 2 00:04:52.099 15:37:47 event -- scripts/common.sh@353 -- # local d=2 00:04:52.099 15:37:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.099 15:37:47 event -- scripts/common.sh@355 -- # echo 2 00:04:52.099 15:37:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.099 15:37:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.099 15:37:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.099 15:37:47 event -- scripts/common.sh@368 -- # return 0 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.099 --rc genhtml_branch_coverage=1 00:04:52.099 --rc genhtml_function_coverage=1 00:04:52.099 --rc genhtml_legend=1 00:04:52.099 --rc geninfo_all_blocks=1 00:04:52.099 --rc geninfo_unexecuted_blocks=1 00:04:52.099 00:04:52.099 ' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.099 --rc genhtml_branch_coverage=1 00:04:52.099 --rc genhtml_function_coverage=1 00:04:52.099 --rc genhtml_legend=1 00:04:52.099 --rc geninfo_all_blocks=1 00:04:52.099 --rc geninfo_unexecuted_blocks=1 00:04:52.099 00:04:52.099 ' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.099 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:52.099 --rc genhtml_branch_coverage=1 00:04:52.099 --rc genhtml_function_coverage=1 00:04:52.099 --rc genhtml_legend=1 00:04:52.099 --rc geninfo_all_blocks=1 00:04:52.099 --rc geninfo_unexecuted_blocks=1 00:04:52.099 00:04:52.099 ' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.099 --rc genhtml_branch_coverage=1 00:04:52.099 --rc genhtml_function_coverage=1 00:04:52.099 --rc genhtml_legend=1 00:04:52.099 --rc geninfo_all_blocks=1 00:04:52.099 --rc geninfo_unexecuted_blocks=1 00:04:52.099 00:04:52.099 ' 00:04:52.099 15:37:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:52.099 15:37:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.099 15:37:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:52.099 15:37:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.099 15:37:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.099 ************************************ 00:04:52.099 START TEST event_perf 00:04:52.099 ************************************ 00:04:52.099 15:37:47 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.099 Running I/O for 1 seconds...[2024-12-09 15:37:47.190624] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:52.099 [2024-12-09 15:37:47.190692] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810407 ] 00:04:52.099 [2024-12-09 15:37:47.269802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.099 [2024-12-09 15:37:47.311929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.099 [2024-12-09 15:37:47.312040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.099 [2024-12-09 15:37:47.312147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.099 [2024-12-09 15:37:47.312146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.477 Running I/O for 1 seconds... 00:04:53.477 lcore 0: 205112 00:04:53.477 lcore 1: 205112 00:04:53.477 lcore 2: 205112 00:04:53.477 lcore 3: 205113 00:04:53.477 done. 
00:04:53.477 00:04:53.477 real 0m1.183s 00:04:53.477 user 0m4.101s 00:04:53.477 sys 0m0.078s 00:04:53.477 15:37:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.477 15:37:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.478 ************************************ 00:04:53.478 END TEST event_perf 00:04:53.478 ************************************ 00:04:53.478 15:37:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.478 15:37:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:53.478 15:37:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.478 15:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.478 ************************************ 00:04:53.478 START TEST event_reactor 00:04:53.478 ************************************ 00:04:53.478 15:37:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.478 [2024-12-09 15:37:48.442017] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:04:53.478 [2024-12-09 15:37:48.442086] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810657 ] 00:04:53.478 [2024-12-09 15:37:48.520474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.478 [2024-12-09 15:37:48.558225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.414 test_start 00:04:54.414 oneshot 00:04:54.414 tick 100 00:04:54.414 tick 100 00:04:54.414 tick 250 00:04:54.414 tick 100 00:04:54.414 tick 100 00:04:54.414 tick 250 00:04:54.414 tick 100 00:04:54.414 tick 500 00:04:54.414 tick 100 00:04:54.414 tick 100 00:04:54.414 tick 250 00:04:54.414 tick 100 00:04:54.414 tick 100 00:04:54.414 test_end 00:04:54.414 00:04:54.414 real 0m1.172s 00:04:54.414 user 0m1.098s 00:04:54.414 sys 0m0.070s 00:04:54.414 15:37:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.414 15:37:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:54.414 ************************************ 00:04:54.414 END TEST event_reactor 00:04:54.414 ************************************ 00:04:54.414 15:37:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.414 15:37:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:54.414 15:37:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.414 15:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.673 ************************************ 00:04:54.673 START TEST event_reactor_perf 00:04:54.673 ************************************ 00:04:54.673 15:37:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:54.673 [2024-12-09 15:37:49.686059] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:54.673 [2024-12-09 15:37:49.686117] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810902 ] 00:04:54.673 [2024-12-09 15:37:49.762693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.673 [2024-12-09 15:37:49.801532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.051 test_start 00:04:56.051 test_end 00:04:56.051 Performance: 502121 events per second 00:04:56.051 00:04:56.051 real 0m1.177s 00:04:56.051 user 0m1.094s 00:04:56.051 sys 0m0.078s 00:04:56.051 15:37:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.051 15:37:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.051 ************************************ 00:04:56.051 END TEST event_reactor_perf 00:04:56.051 ************************************ 00:04:56.051 15:37:50 event -- event/event.sh@49 -- # uname -s 00:04:56.051 15:37:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.051 15:37:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.051 15:37:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.051 15:37:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.051 15:37:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.051 ************************************ 00:04:56.051 START TEST event_scheduler 00:04:56.051 ************************************ 00:04:56.051 15:37:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.051 * Looking for test storage... 00:04:56.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.051 15:37:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.051 --rc genhtml_branch_coverage=1 00:04:56.051 --rc genhtml_function_coverage=1 00:04:56.051 --rc genhtml_legend=1 00:04:56.051 --rc geninfo_all_blocks=1 00:04:56.051 --rc geninfo_unexecuted_blocks=1 00:04:56.051 00:04:56.051 ' 00:04:56.051 15:37:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.051 --rc genhtml_branch_coverage=1 00:04:56.051 --rc genhtml_function_coverage=1 00:04:56.051 --rc 
genhtml_legend=1 00:04:56.052 --rc geninfo_all_blocks=1 00:04:56.052 --rc geninfo_unexecuted_blocks=1 00:04:56.052 00:04:56.052 ' 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.052 --rc genhtml_branch_coverage=1 00:04:56.052 --rc genhtml_function_coverage=1 00:04:56.052 --rc genhtml_legend=1 00:04:56.052 --rc geninfo_all_blocks=1 00:04:56.052 --rc geninfo_unexecuted_blocks=1 00:04:56.052 00:04:56.052 ' 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.052 --rc genhtml_branch_coverage=1 00:04:56.052 --rc genhtml_function_coverage=1 00:04:56.052 --rc genhtml_legend=1 00:04:56.052 --rc geninfo_all_blocks=1 00:04:56.052 --rc geninfo_unexecuted_blocks=1 00:04:56.052 00:04:56.052 ' 00:04:56.052 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.052 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1811185 00:04:56.052 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.052 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.052 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1811185 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1811185 ']' 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.052 15:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.052 [2024-12-09 15:37:51.135946] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:04:56.052 [2024-12-09 15:37:51.135995] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811185 ] 00:04:56.052 [2024-12-09 15:37:51.209998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.052 [2024-12-09 15:37:51.251053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.052 [2024-12-09 15:37:51.251163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.052 [2024-12-09 15:37:51.251271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.052 [2024-12-09 15:37:51.251271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:56.311 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 [2024-12-09 15:37:51.291891] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:56.311 [2024-12-09 15:37:51.291913] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.311 [2024-12-09 15:37:51.291922] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.311 [2024-12-09 15:37:51.291927] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.311 [2024-12-09 15:37:51.291932] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 [2024-12-09 15:37:51.370086] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 ************************************ 00:04:56.311 START TEST scheduler_create_thread 00:04:56.311 ************************************ 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 2 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 3 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 4 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 5 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 6 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 7 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 8 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 9 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 10 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.311 15:37:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.245 15:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.245 15:37:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:57.245 15:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.245 15:37:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.618 15:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.618 15:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.618 15:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.618 15:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.618 15:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.997 15:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.997 00:04:59.997 real 0m3.384s 00:04:59.997 user 0m0.023s 00:04:59.997 sys 0m0.006s 00:04:59.997 15:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.997 15:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.997 ************************************ 00:04:59.997 END TEST scheduler_create_thread 00:04:59.997 ************************************ 00:04:59.997 15:37:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:59.997 15:37:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1811185 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1811185 ']' 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1811185 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811185 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811185' 00:04:59.997 killing process with pid 1811185 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1811185 00:04:59.997 15:37:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1811185 00:04:59.997 [2024-12-09 15:37:55.166155] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:00.257 00:05:00.257 real 0m4.452s 00:05:00.257 user 0m7.799s 00:05:00.257 sys 0m0.353s 00:05:00.257 15:37:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.257 15:37:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.257 ************************************ 00:05:00.257 END TEST event_scheduler 00:05:00.257 ************************************ 00:05:00.257 15:37:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:00.257 15:37:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:00.257 15:37:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.257 15:37:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.257 15:37:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.257 ************************************ 00:05:00.257 START TEST app_repeat 00:05:00.257 ************************************ 00:05:00.257 15:37:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1811921 00:05:00.257 15:37:55 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:00.258 15:37:55 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.258 15:37:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1811921' 00:05:00.258 Process app_repeat pid: 1811921 00:05:00.258 15:37:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.258 15:37:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:00.258 spdk_app_start Round 0 00:05:00.258 15:37:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1811921 /var/tmp/spdk-nbd.sock 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1811921 ']' 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.258 15:37:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.258 [2024-12-09 15:37:55.479250] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:05:00.258 [2024-12-09 15:37:55.479301] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811921 ] 00:05:00.516 [2024-12-09 15:37:55.552500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.516 [2024-12-09 15:37:55.591400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.516 [2024-12-09 15:37:55.591402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.516 15:37:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.516 15:37:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.516 15:37:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.775 Malloc0 00:05:00.775 15:37:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.034 Malloc1 00:05:01.034 15:37:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.034 
15:37:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.034 15:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.293 /dev/nbd0 00:05:01.293 15:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.293 15:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:01.293 1+0 records in 00:05:01.293 1+0 records out 00:05:01.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000104922 s, 39.0 MB/s 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.293 15:37:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.293 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.293 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.293 15:37:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.552 /dev/nbd1 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.552 15:37:56 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.552 1+0 records in 00:05:01.552 1+0 records out 00:05:01.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241835 s, 16.9 MB/s 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.552 15:37:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.552 15:37:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.811 { 00:05:01.811 "nbd_device": "/dev/nbd0", 00:05:01.811 "bdev_name": "Malloc0" 00:05:01.811 }, 00:05:01.811 { 00:05:01.811 "nbd_device": "/dev/nbd1", 00:05:01.811 "bdev_name": "Malloc1" 00:05:01.811 } 00:05:01.811 ]' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.811 { 00:05:01.811 "nbd_device": "/dev/nbd0", 00:05:01.811 "bdev_name": "Malloc0" 00:05:01.811 
}, 00:05:01.811 { 00:05:01.811 "nbd_device": "/dev/nbd1", 00:05:01.811 "bdev_name": "Malloc1" 00:05:01.811 } 00:05:01.811 ]' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.811 /dev/nbd1' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.811 /dev/nbd1' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.811 256+0 records in 00:05:01.811 256+0 records out 00:05:01.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00324998 s, 323 MB/s 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.811 256+0 records in 00:05:01.811 256+0 records out 00:05:01.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137219 s, 76.4 MB/s 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.811 256+0 records in 00:05:01.811 256+0 records out 00:05:01.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145945 s, 71.8 MB/s 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.811 15:37:56 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.811 15:37:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.070 15:37:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.329 15:37:57 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.329 15:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.588 15:37:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.588 15:37:57 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.846 15:37:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.846 [2024-12-09 15:37:57.959417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.846 [2024-12-09 15:37:57.994912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.846 [2024-12-09 15:37:57.994912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.846 [2024-12-09 15:37:58.035029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.846 [2024-12-09 15:37:58.035068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.133 15:38:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.133 15:38:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:06.133 spdk_app_start Round 1 00:05:06.133 15:38:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1811921 /var/tmp/spdk-nbd.sock 00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1811921 ']' 00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
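The `waitfornbd`/`waitfornbd_exit` entries in the trace above all follow the same polling pattern: loop up to 20 times, `grep -q -w` for the device name in the partitions table, and `break` once it appears (or disappears). A minimal sketch of that pattern follows; the partitions file is taken as a parameter here purely for testability (an assumption of this sketch — the real helper in `autotest_common.sh` reads `/proc/partitions` directly):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling pattern seen in the trace:
# retry up to 20 times until the device name shows up as a whole
# word in the partitions table, then report success.
waitfornbd() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}   # parameter added for testing only
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0   # device is visible
        fi
        sleep 0.1      # brief back-off before the next poll
    done
    return 1           # gave up after 20 attempts
}
```

In the trace this is invoked right after `nbd_start_disk`, e.g. `waitfornbd nbd0`, and the mirror-image `waitfornbd_exit` loop waits for the name to drop out of `/proc/partitions` after `nbd_stop_disk`.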
00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.133 15:38:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.133 15:38:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.133 15:38:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:06.133 15:38:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.133 Malloc0 00:05:06.133 15:38:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.391 Malloc1 00:05:06.391 15:38:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.391 15:38:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.392 15:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.650 /dev/nbd0 00:05:06.650 15:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.650 15:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.650 1+0 records in 00:05:06.650 1+0 records out 00:05:06.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238193 s, 17.2 MB/s 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.650 15:38:01 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.650 15:38:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.650 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.650 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.650 15:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.908 /dev/nbd1 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.908 1+0 records in 00:05:06.908 1+0 records out 00:05:06.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253271 s, 16.2 MB/s 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.908 15:38:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.908 15:38:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.908 15:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.908 { 00:05:06.908 "nbd_device": "/dev/nbd0", 00:05:06.908 "bdev_name": "Malloc0" 00:05:06.908 }, 00:05:06.908 { 00:05:06.908 "nbd_device": "/dev/nbd1", 00:05:06.908 "bdev_name": "Malloc1" 00:05:06.908 } 00:05:06.908 ]' 00:05:06.908 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.908 { 00:05:06.908 "nbd_device": "/dev/nbd0", 00:05:06.908 "bdev_name": "Malloc0" 00:05:06.908 }, 00:05:06.908 { 00:05:06.908 "nbd_device": "/dev/nbd1", 00:05:06.908 "bdev_name": "Malloc1" 00:05:06.908 } 00:05:06.908 ]' 00:05:06.908 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.167 /dev/nbd1' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.167 /dev/nbd1' 00:05:07.167 
15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.167 256+0 records in 00:05:07.167 256+0 records out 00:05:07.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101483 s, 103 MB/s 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.167 256+0 records in 00:05:07.167 256+0 records out 00:05:07.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014202 s, 73.8 MB/s 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.167 256+0 records in 00:05:07.167 256+0 records out 00:05:07.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015145 s, 69.2 MB/s 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.167 15:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.426 15:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.684 15:38:02 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.684 15:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.942 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.942 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.942 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.942 15:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.942 15:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.943 15:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.943 15:38:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.943 15:38:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.943 15:38:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.943 15:38:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.201 [2024-12-09 15:38:03.287375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.201 [2024-12-09 15:38:03.322662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.201 [2024-12-09 15:38:03.322663] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.201 [2024-12-09 15:38:03.363781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.201 [2024-12-09 15:38:03.363820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.482 15:38:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.482 15:38:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.482 spdk_app_start Round 2 00:05:11.482 15:38:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1811921 /var/tmp/spdk-nbd.sock 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1811921 ']' 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
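The `nbd_dd_data_verify` sequence that repeats in each round above is a write/verify round-trip: fill a temp file with 1 MiB of random data (256 x 4 KiB blocks), `dd` it onto every device in `nbd_list`, then `cmp -b -n 1M` each device back against the source file. The sketch below reproduces that flow with regular files standing in for `/dev/nbd0` and `/dev/nbd1` (an assumption for portability), which also means `oflag=direct` is dropped; the real test writes to the NBD block devices with direct I/O:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify round-trip from the
# trace, using plain files as stand-ins for the /dev/nbd* devices.
set -e
workdir=$(mktemp -d)
tmp_file=$workdir/nbdrandtest
nbd_list=("$workdir/nbd0" "$workdir/nbd1")   # stand-ins for /dev/nbd0 /dev/nbd1

# write phase: 256 x 4 KiB = 1 MiB of random data, copied to each device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1M of each device against the source;
# cmp exits non-zero (and set -e aborts) on the first mismatch
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"
done
rm -rf "$workdir"
```

A mismatch at any byte makes `cmp` exit non-zero, which is what fails the test round; on success the temp file is removed, matching the `rm .../nbdrandtest` entry in the trace.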
00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.482 15:38:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.482 15:38:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.482 Malloc0 00:05:11.482 15:38:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.741 Malloc1 00:05:11.741 15:38:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.741 15:38:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.999 /dev/nbd0 00:05:11.999 15:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.999 15:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.999 1+0 records in 00:05:11.999 1+0 records out 00:05:11.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018986 s, 21.6 MB/s 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.999 15:38:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.999 15:38:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.999 15:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.999 15:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.999 15:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.257 /dev/nbd1 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.257 1+0 records in 00:05:12.257 1+0 records out 00:05:12.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236364 s, 17.3 MB/s 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.257 15:38:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.257 { 00:05:12.257 "nbd_device": "/dev/nbd0", 00:05:12.257 "bdev_name": "Malloc0" 00:05:12.257 }, 00:05:12.257 { 00:05:12.257 "nbd_device": "/dev/nbd1", 00:05:12.257 "bdev_name": "Malloc1" 00:05:12.257 } 00:05:12.257 ]' 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.257 { 00:05:12.257 "nbd_device": "/dev/nbd0", 00:05:12.257 "bdev_name": "Malloc0" 00:05:12.257 }, 00:05:12.257 { 00:05:12.257 "nbd_device": "/dev/nbd1", 00:05:12.257 "bdev_name": "Malloc1" 00:05:12.257 } 00:05:12.257 ]' 00:05:12.257 15:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.515 /dev/nbd1' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.515 /dev/nbd1' 00:05:12.515 
15:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.515 256+0 records in 00:05:12.515 256+0 records out 00:05:12.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103233 s, 102 MB/s 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.515 256+0 records in 00:05:12.515 256+0 records out 00:05:12.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141837 s, 73.9 MB/s 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.515 256+0 records in 00:05:12.515 256+0 records out 00:05:12.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146526 s, 71.6 MB/s 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.515 15:38:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.772 15:38:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.031 15:38:08 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.031 15:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.289 15:38:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.289 15:38:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.289 15:38:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.547 [2024-12-09 15:38:08.651072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.547 [2024-12-09 15:38:08.686707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.547 [2024-12-09 15:38:08.686707] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.547 [2024-12-09 15:38:08.727343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.547 [2024-12-09 15:38:08.727384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.829 15:38:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1811921 /var/tmp/spdk-nbd.sock 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1811921 ']' 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
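The nbd_dd_data_verify sequence traced above boils down to: write a 1 MiB random pattern with dd, copy it to each nbd device, then cmp the first 1M of every device against the pattern file. A self-contained sketch of that write/verify cycle, with plain temp files standing in for /dev/nbd0 and /dev/nbd1 (assumption: the real test writes through NBD block devices with oflag=direct, which a regular file does not need):

```shell
tmp_file=$(mktemp); nbd0=$(mktemp); nbd1=$(mktemp)

# write phase: generate a 1 MiB random pattern, then copy it to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1M of each "device" with the pattern
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev" || exit 1
done
rm "$tmp_file"          # the trace removes nbdrandtest the same way
echo "verify ok"
```

The same shape explains the dd statistics in the log: 256 blocks of 4096 bytes in, 1048576 bytes out per device.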
00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:16.829 15:38:11 event.app_repeat -- event/event.sh@39 -- # killprocess 1811921 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1811921 ']' 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1811921 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811921 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811921' 00:05:16.829 killing process with pid 1811921 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1811921 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1811921 00:05:16.829 spdk_app_start is called in Round 0. 00:05:16.829 Shutdown signal received, stop current app iteration 00:05:16.829 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:05:16.829 spdk_app_start is called in Round 1. 00:05:16.829 Shutdown signal received, stop current app iteration 00:05:16.829 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:05:16.829 spdk_app_start is called in Round 2. 
00:05:16.829 Shutdown signal received, stop current app iteration 00:05:16.829 Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 reinitialization... 00:05:16.829 spdk_app_start is called in Round 3. 00:05:16.829 Shutdown signal received, stop current app iteration 00:05:16.829 15:38:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.829 15:38:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.829 00:05:16.829 real 0m16.442s 00:05:16.829 user 0m36.194s 00:05:16.829 sys 0m2.544s 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.829 15:38:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.829 ************************************ 00:05:16.829 END TEST app_repeat 00:05:16.829 ************************************ 00:05:16.829 15:38:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.829 15:38:11 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.829 15:38:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.829 15:38:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.829 15:38:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.829 ************************************ 00:05:16.829 START TEST cpu_locks 00:05:16.830 ************************************ 00:05:16.830 15:38:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.830 * Looking for test storage... 
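When the nbd disks are stopped above, waitfornbd_exit polls /proc/partitions with `grep -q -w` up to 20 times until the device name disappears. A runnable sketch of that bounded polling loop, with a temp file standing in for /proc/partitions (assumption: the delayed truncation simulates the kernel tearing the device down):

```shell
partitions=$(mktemp)                   # stands in for /proc/partitions
printf 'nbd0\n' > "$partitions"
( sleep 0.2; : > "$partitions" ) &     # simulate the kernel removing nbd0

i=1
while [ "$i" -le 20 ]; do
    # -w matches whole words only, so nbd0 never matches nbd10
    grep -q -w nbd0 "$partitions" || break
    sleep 0.1
    i=$((i + 1))
done
wait
grep -q -w nbd0 "$partitions" || echo "nbd0 is gone"
```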
00:05:16.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.830 15:38:12 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.830 15:38:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.830 15:38:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.089 15:38:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.089 --rc genhtml_branch_coverage=1 00:05:17.089 --rc genhtml_function_coverage=1 00:05:17.089 --rc genhtml_legend=1 00:05:17.089 --rc geninfo_all_blocks=1 00:05:17.089 --rc geninfo_unexecuted_blocks=1 00:05:17.089 00:05:17.089 ' 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.089 --rc genhtml_branch_coverage=1 00:05:17.089 --rc genhtml_function_coverage=1 00:05:17.089 --rc genhtml_legend=1 00:05:17.089 --rc geninfo_all_blocks=1 00:05:17.089 --rc geninfo_unexecuted_blocks=1 
00:05:17.089 00:05:17.089 ' 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.089 --rc genhtml_branch_coverage=1 00:05:17.089 --rc genhtml_function_coverage=1 00:05:17.089 --rc genhtml_legend=1 00:05:17.089 --rc geninfo_all_blocks=1 00:05:17.089 --rc geninfo_unexecuted_blocks=1 00:05:17.089 00:05:17.089 ' 00:05:17.089 15:38:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.089 --rc genhtml_branch_coverage=1 00:05:17.089 --rc genhtml_function_coverage=1 00:05:17.089 --rc genhtml_legend=1 00:05:17.089 --rc geninfo_all_blocks=1 00:05:17.089 --rc geninfo_unexecuted_blocks=1 00:05:17.089 00:05:17.089 ' 00:05:17.089 15:38:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:17.089 15:38:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:17.089 15:38:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:17.090 15:38:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:17.090 15:38:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.090 15:38:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.090 15:38:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.090 ************************************ 00:05:17.090 START TEST default_locks 00:05:17.090 ************************************ 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1814881 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1814881 00:05:17.090 15:38:12 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1814881 ']' 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.090 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.090 [2024-12-09 15:38:12.219628] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
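The `lt 1.15 2` / cmp_versions trace earlier in this section compares the installed lcov version against 2 field by field. A minimal POSIX sketch of that idea (assumption: this is simplified; the real scripts/common.sh also splits on '-' and ':' and discards non-numeric fields):

```shell
# version_lt VER1 VER2 -> succeeds (exit 0) when VER1 sorts before VER2
version_lt() {
    v1=$1 v2=$2
    while [ -n "$v1" ] || [ -n "$v2" ]; do
        a=${v1%%.*}; b=${v2%%.*}          # leading field of each version
        case $v1 in *.*) v1=${v1#*.} ;; *) v1='' ;; esac
        case $v2 in *.*) v2=${v2#*.} ;; *) v2='' ;; esac
        if [ "${a:-0}" -lt "${b:-0}" ]; then return 0; fi
        if [ "${a:-0}" -gt "${b:-0}" ]; then return 1; fi
    done
    return 1                              # equal versions are not "<"
}

if version_lt 1.15 2; then echo "lcov 1.15 is older than 2"; fi
```

Missing fields default to 0, so 1.15 compares greater than 1, matching the field-count loop in the trace.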
00:05:17.090 [2024-12-09 15:38:12.219670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814881 ] 00:05:17.090 [2024-12-09 15:38:12.291223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.349 [2024-12-09 15:38:12.333968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.349 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.349 15:38:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:17.349 15:38:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1814881 00:05:17.349 15:38:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1814881 00:05:17.349 15:38:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.917 lslocks: write error 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1814881 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1814881 ']' 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1814881 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814881 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1814881' 00:05:17.917 killing process with pid 1814881 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1814881 00:05:17.917 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1814881 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1814881 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1814881 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1814881 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1814881 ']' 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
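The killprocess helper traced above probes the pid with `kill -0`, refuses to signal a process whose comm is sudo, then sends SIGTERM and reaps it. A self-contained sketch against a throwaway sleep process (assumptions: simplified from autotest_common.sh, and `ps --no-headers` is GNU procps):

```shell
sleep 60 &
pid=$!

killprocess() {
    # kill -0 probes for existence without delivering a signal
    if ! kill -0 "$1" 2>/dev/null; then return 0; fi
    name=$(ps --no-headers -o comm= "$1")
    if [ "$name" = sudo ]; then return 1; fi   # never signal a sudo wrapper
    kill "$1"
    wait "$1" 2>/dev/null || true              # reap; ignore the signal status
}

if killprocess "$pid"; then echo "killed $pid"; fi
```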
00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1814881) - No such process 00:05:18.177 ERROR: process (pid: 1814881) is no longer running 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.177 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.437 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.437 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.437 15:38:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.437 00:05:18.437 real 0m1.238s 00:05:18.437 user 0m1.208s 00:05:18.437 sys 0m0.584s 00:05:18.437 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.437 15:38:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.437 ************************************ 00:05:18.437 END TEST default_locks 00:05:18.437 ************************************ 00:05:18.437 15:38:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.437 15:38:13 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.437 15:38:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.437 15:38:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.437 ************************************ 00:05:18.437 START TEST default_locks_via_rpc 00:05:18.437 ************************************ 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1815145 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1815145 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1815145 ']' 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.437 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.437 [2024-12-09 15:38:13.528282] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
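waitforlisten, as used above, retries up to max_retries=100 until the target's RPC UNIX socket exists and answers. A sketch of the polling skeleton, with a plain file standing in for /var/tmp/spdk.sock (assumption: the real helper also issues an RPC to confirm the socket is answering, not just present):

```shell
rpc_addr=$(mktemp -u)                  # socket path that does not exist yet
( sleep 0.2; touch "$rpc_addr" ) &     # simulate spdk_tgt creating it late

max_retries=100
i=0
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$rpc_addr" ]; then break; fi
    sleep 0.1
    i=$((i + 1))
done
wait
if [ -e "$rpc_addr" ]; then echo "listening on $rpc_addr"; fi
```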
00:05:18.437 [2024-12-09 15:38:13.528322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815145 ] 00:05:18.437 [2024-12-09 15:38:13.602775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.437 [2024-12-09 15:38:13.643397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.697 15:38:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1815145 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1815145 00:05:18.697 15:38:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1815145 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1815145 ']' 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1815145 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815145 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815145' 00:05:19.265 killing process with pid 1815145 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1815145 00:05:19.265 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1815145 00:05:19.525 00:05:19.525 real 0m1.213s 00:05:19.525 user 0m1.153s 00:05:19.525 sys 0m0.570s 00:05:19.525 15:38:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.525 15:38:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.525 ************************************ 00:05:19.525 END TEST default_locks_via_rpc 00:05:19.525 ************************************ 00:05:19.525 15:38:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.525 15:38:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.525 15:38:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.525 15:38:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.784 ************************************ 00:05:19.784 START TEST non_locking_app_on_locked_coremask 00:05:19.784 ************************************ 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1815397 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1815397 /var/tmp/spdk.sock 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1815397 ']' 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:19.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.784 15:38:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.784 [2024-12-09 15:38:14.813871] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:19.784 [2024-12-09 15:38:14.813914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815397 ] 00:05:19.785 [2024-12-09 15:38:14.886280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.785 [2024-12-09 15:38:14.927195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1815504 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1815504 /var/tmp/spdk2.sock 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1815504 ']' 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.047 15:38:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.047 [2024-12-09 15:38:15.201611] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:20.048 [2024-12-09 15:38:15.201658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815504 ] 00:05:20.309 [2024-12-09 15:38:15.287713] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
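locks_exist, seen above, decides whether the target still holds its per-core lock by running `lslocks -p <pid>` and grepping for spdk_cpu_lock. Because lslocks output depends on the running system, this sketch approximates the check with a second non-blocking flock on a temp file instead (assumption: the temp file stands in for an spdk_cpu_lock file, and a failed `flock -n` plays the role of the lslocks match):

```shell
lockfile=$(mktemp)                     # stands in for an spdk_cpu_lock file
( flock -x 9; sleep 0.5 ) 9>"$lockfile" &
sleep 0.1                              # give the holder time to acquire

# while the exclusive lock is held, a second non-blocking flock fails
if ! flock -n -x 9 9>"$lockfile"; then
    echo "lock held"
fi
wait
```

Once the holder exits, the lock is released and the same non-blocking flock succeeds, which is why the trace only sees the lock while spdk_tgt is alive.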
00:05:20.309 [2024-12-09 15:38:15.287740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.309 [2024-12-09 15:38:15.373906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.227 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.227 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.227 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1815397 00:05:21.227 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1815397 00:05:21.227 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.492 lslocks: write error 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1815397 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1815397 ']' 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1815397 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815397 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1815397' 00:05:21.492 killing process with pid 1815397 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1815397 00:05:21.492 15:38:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1815397 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1815504 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1815504 ']' 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1815504 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815504 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815504' 00:05:22.061 killing process with pid 1815504 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1815504 00:05:22.061 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1815504 00:05:22.320 00:05:22.320 real 0m2.711s 00:05:22.320 user 0m2.856s 00:05:22.320 sys 0m0.920s 00:05:22.320 15:38:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.320 15:38:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 ************************************ 00:05:22.320 END TEST non_locking_app_on_locked_coremask 00:05:22.320 ************************************ 00:05:22.320 15:38:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:22.320 15:38:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.320 15:38:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.320 15:38:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.320 ************************************ 00:05:22.320 START TEST locking_app_on_unlocked_coremask 00:05:22.320 ************************************ 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1815891 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1815891 /var/tmp/spdk.sock 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1815891 ']' 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.320 15:38:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.320 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.580 [2024-12-09 15:38:17.596368] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:22.580 [2024-12-09 15:38:17.596416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815891 ] 00:05:22.580 [2024-12-09 15:38:17.671027] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.580 [2024-12-09 15:38:17.671055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.580 [2024-12-09 15:38:17.711506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1816014 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1816014 /var/tmp/spdk2.sock 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1816014 ']' 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.839 15:38:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.839 [2024-12-09 15:38:17.972537] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:05:22.839 [2024-12-09 15:38:17.972584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816014 ] 00:05:22.839 [2024-12-09 15:38:18.065983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.098 [2024-12-09 15:38:18.146465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.666 15:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.666 15:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.666 15:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1816014 00:05:23.666 15:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1816014 00:05:23.666 15:38:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.234 lslocks: write error 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1815891 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1815891 ']' 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1815891 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815891 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815891' 00:05:24.234 killing process with pid 1815891 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1815891 00:05:24.234 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1815891 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1816014 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1816014 ']' 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1816014 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.803 15:38:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816014 00:05:24.803 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.803 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.803 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816014' 00:05:24.803 killing process with pid 1816014 00:05:24.803 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1816014 00:05:24.803 15:38:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1816014 00:05:25.373 00:05:25.373 real 0m2.780s 00:05:25.373 user 0m2.937s 00:05:25.373 sys 0m0.933s 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 END TEST locking_app_on_unlocked_coremask 00:05:25.373 ************************************ 00:05:25.373 15:38:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:25.373 15:38:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.373 15:38:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.373 15:38:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 START TEST locking_app_on_locked_coremask 00:05:25.373 ************************************ 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1816385 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1816385 /var/tmp/spdk.sock 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1816385 ']' 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.373 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 [2024-12-09 15:38:20.445446] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:25.373 [2024-12-09 15:38:20.445489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816385 ] 00:05:25.373 [2024-12-09 15:38:20.521976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.373 [2024-12-09 15:38:20.562998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1816582 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1816582 /var/tmp/spdk2.sock 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1816582 /var/tmp/spdk2.sock 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1816582 /var/tmp/spdk2.sock 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1816582 ']' 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.633 15:38:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.633 [2024-12-09 15:38:20.820659] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:25.633 [2024-12-09 15:38:20.820706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816582 ] 00:05:25.893 [2024-12-09 15:38:20.908076] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1816385 has claimed it. 00:05:25.893 [2024-12-09 15:38:20.908111] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1816582) - No such process 00:05:26.461 ERROR: process (pid: 1816582) is no longer running 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.461 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1816385 00:05:26.462 15:38:21 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1816385 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.462 lslocks: write error 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1816385 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1816385 ']' 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1816385 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.462 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816385 00:05:26.721 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.721 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.721 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816385' 00:05:26.721 killing process with pid 1816385 00:05:26.721 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1816385 00:05:26.721 15:38:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1816385 00:05:26.981 00:05:26.981 real 0m1.616s 00:05:26.981 user 0m1.717s 00:05:26.981 sys 0m0.540s 00:05:26.981 15:38:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.981 15:38:22 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.981 ************************************ 00:05:26.981 END TEST locking_app_on_locked_coremask 00:05:26.981 ************************************ 00:05:26.981 15:38:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.981 15:38:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.981 15:38:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.981 15:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.981 ************************************ 00:05:26.981 START TEST locking_overlapped_coremask 00:05:26.981 ************************************ 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1816793 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1816793 /var/tmp/spdk.sock 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1816793 ']' 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.981 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.981 [2024-12-09 15:38:22.129633] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:26.981 [2024-12-09 15:38:22.129677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816793 ] 00:05:26.981 [2024-12-09 15:38:22.202789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.240 [2024-12-09 15:38:22.245938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.240 [2024-12-09 15:38:22.246048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.240 [2024-12-09 15:38:22.246049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1816866 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1816866 /var/tmp/spdk2.sock 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1816866 /var/tmp/spdk2.sock 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1816866 /var/tmp/spdk2.sock 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1816866 ']' 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.240 15:38:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.499 [2024-12-09 15:38:22.510150] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:05:27.499 [2024-12-09 15:38:22.510192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816866 ]
00:05:27.499 [2024-12-09 15:38:22.599928] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1816793 has claimed it.
00:05:27.499 [2024-12-09 15:38:22.599964] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:28.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1816866) - No such process
00:05:28.066 ERROR: process (pid: 1816866) is no longer running
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1816793
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1816793 ']'
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1816793
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816793
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816793'
killing process with pid 1816793
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1816793
00:05:28.066 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1816793
00:05:28.326
00:05:28.326 real 0m1.423s
00:05:28.326 user 0m3.922s
00:05:28.326 sys 0m0.386s
00:05:28.326 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.326 15:38:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.326 ************************************
00:05:28.326 END TEST locking_overlapped_coremask
00:05:28.326 ************************************
00:05:28.326 15:38:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:28.326 15:38:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.326 15:38:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.326 15:38:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:28.585 ************************************
00:05:28.585 START TEST locking_overlapped_coremask_via_rpc
00:05:28.585 ************************************
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1817119
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1817119 /var/tmp/spdk.sock
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1817119 ']'
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.585 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.585 [2024-12-09 15:38:23.622865] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
[2024-12-09 15:38:23.622907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817119 ]
[2024-12-09 15:38:23.698825] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-12-09 15:38:23.698853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-09 15:38:23.738989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-09 15:38:23.739102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-09 15:38:23.739102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1817137
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1817137 /var/tmp/spdk2.sock
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1817137 ']'
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:28.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.844 15:38:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.844 [2024-12-09 15:38:23.990661] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
[2024-12-09 15:38:23.990707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817137 ]
00:05:29.102 [2024-12-09 15:38:24.081682] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:29.102 [2024-12-09 15:38:24.081713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:29.102 [2024-12-09 15:38:24.163285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-09 15:38:24.163403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-09 15:38:24.163404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.669 [2024-12-09 15:38:24.872289] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1817119 has claimed it.
00:05:29.669 request:
00:05:29.669 {
00:05:29.669 "method": "framework_enable_cpumask_locks",
00:05:29.669 "req_id": 1
00:05:29.669 }
00:05:29.669 Got JSON-RPC error response
00:05:29.669 response:
00:05:29.669 {
00:05:29.669 "code": -32603,
00:05:29.669 "message": "Failed to claim CPU core: 2"
00:05:29.669 }
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1817119 /var/tmp/spdk.sock
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1817119 ']'
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:29.669 15:38:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:29.927 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:29.927 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:29.927 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1817137 /var/tmp/spdk2.sock
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1817137 ']'
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:29.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:29.928 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:30.186
00:05:30.186 real 0m1.728s
00:05:30.186 user 0m0.835s
00:05:30.186 sys 0m0.147s
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.186 15:38:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.186 ************************************
00:05:30.186 END TEST locking_overlapped_coremask_via_rpc
00:05:30.186 ************************************
00:05:30.186 15:38:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:30.186 15:38:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1817119 ]]
00:05:30.186 15:38:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1817119
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1817119 ']'
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1817119
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1817119
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1817119'
killing process with pid 1817119
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1817119
00:05:30.186 15:38:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1817119
00:05:30.754 15:38:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1817137 ]]
00:05:30.754 15:38:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1817137
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1817137 ']'
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1817137
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1817137
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:30.754 15:38:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1817137'
killing process with pid 1817137
15:38:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1817137
15:38:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1817137
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1817119 ]]
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1817119
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1817119 ']'
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1817119
00:05:31.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1817119) - No such process
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1817119 is not found'
Process with pid 1817119 is not found
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1817137 ]]
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1817137
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1817137 ']'
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1817137
00:05:31.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1817137) - No such process
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1817137 is not found'
Process with pid 1817137 is not found
00:05:31.013 15:38:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:31.013
00:05:31.013 real 0m14.098s
00:05:31.013 user 0m24.515s
00:05:31.013 sys 0m5.036s
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.013 15:38:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.013 ************************************
00:05:31.013 END TEST cpu_locks
00:05:31.013 ************************************
00:05:31.013
00:05:31.013 real 0m39.134s
00:05:31.013 user 1m15.069s
00:05:31.014 sys 0m8.537s
00:05:31.014 15:38:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.014 15:38:26 event -- common/autotest_common.sh@10 -- # set +x
00:05:31.014 ************************************
00:05:31.014 END TEST event
00:05:31.014 ************************************
00:05:31.014 15:38:26 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:31.014 15:38:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:31.014 15:38:26 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.014 15:38:26 -- common/autotest_common.sh@10 -- # set +x
00:05:31.014 ************************************
00:05:31.014 START TEST thread
00:05:31.014 ************************************
00:05:31.014 15:38:26 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:31.273 * Looking for test storage...
00:05:31.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:31.273 15:38:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.273 15:38:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.273 15:38:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.273 15:38:26 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.273 15:38:26 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.273 15:38:26 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.273 15:38:26 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.273 15:38:26 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.273 15:38:26 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.273 15:38:26 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.273 15:38:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.273 15:38:26 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:31.273 15:38:26 thread -- scripts/common.sh@345 -- # : 1
00:05:31.273 15:38:26 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.273 15:38:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.273 15:38:26 thread -- scripts/common.sh@365 -- # decimal 1
00:05:31.273 15:38:26 thread -- scripts/common.sh@353 -- # local d=1
00:05:31.273 15:38:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.273 15:38:26 thread -- scripts/common.sh@355 -- # echo 1
00:05:31.273 15:38:26 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.273 15:38:26 thread -- scripts/common.sh@366 -- # decimal 2
00:05:31.273 15:38:26 thread -- scripts/common.sh@353 -- # local d=2
00:05:31.273 15:38:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.273 15:38:26 thread -- scripts/common.sh@355 -- # echo 2
00:05:31.273 15:38:26 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.273 15:38:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.273 15:38:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.273 15:38:26 thread -- scripts/common.sh@368 -- # return 0
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:31.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.273 --rc genhtml_branch_coverage=1
00:05:31.273 --rc genhtml_function_coverage=1
00:05:31.273 --rc genhtml_legend=1
00:05:31.273 --rc geninfo_all_blocks=1
00:05:31.273 --rc geninfo_unexecuted_blocks=1
00:05:31.273
00:05:31.273 '
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:31.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.273 --rc genhtml_branch_coverage=1
00:05:31.273 --rc genhtml_function_coverage=1
00:05:31.273 --rc genhtml_legend=1
00:05:31.273 --rc geninfo_all_blocks=1
00:05:31.273 --rc geninfo_unexecuted_blocks=1
00:05:31.273
00:05:31.273 '
00:05:31.273 15:38:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:31.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.273 --rc genhtml_branch_coverage=1
00:05:31.274 --rc genhtml_function_coverage=1
00:05:31.274 --rc genhtml_legend=1
00:05:31.274 --rc geninfo_all_blocks=1
00:05:31.274 --rc geninfo_unexecuted_blocks=1
00:05:31.274
00:05:31.274 '
00:05:31.274 15:38:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:31.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.274 --rc genhtml_branch_coverage=1
00:05:31.274 --rc genhtml_function_coverage=1
00:05:31.274 --rc genhtml_legend=1
00:05:31.274 --rc geninfo_all_blocks=1
00:05:31.274 --rc geninfo_unexecuted_blocks=1
00:05:31.274
00:05:31.274 '
00:05:31.274 15:38:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:31.274 15:38:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:31.274 15:38:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.274 15:38:26 thread -- common/autotest_common.sh@10 -- # set +x
00:05:31.274 ************************************
00:05:31.274 START TEST thread_poller_perf
00:05:31.274 ************************************
00:05:31.274 15:38:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:31.274 [2024-12-09 15:38:26.401088] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
00:05:31.274 [2024-12-09 15:38:26.401158] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817691 ]
00:05:31.274 [2024-12-09 15:38:26.479335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.532 [2024-12-09 15:38:26.518518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.532 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:32.468 [2024-12-09T14:38:27.696Z] ======================================
00:05:32.469 [2024-12-09T14:38:27.697Z] busy:2108117738 (cyc)
00:05:32.469 [2024-12-09T14:38:27.697Z] total_run_count: 417000
00:05:32.469 [2024-12-09T14:38:27.697Z] tsc_hz: 2100000000 (cyc)
00:05:32.469 [2024-12-09T14:38:27.697Z] ======================================
00:05:32.469 [2024-12-09T14:38:27.697Z] poller_cost: 5055 (cyc), 2407 (nsec)
00:05:32.469
00:05:32.469 real 0m1.185s
00:05:32.469 user 0m1.098s
00:05:32.469 sys 0m0.084s
00:05:32.469 15:38:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.469 15:38:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:32.469 ************************************
00:05:32.469 END TEST thread_poller_perf
00:05:32.469 ************************************
00:05:32.469 15:38:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:32.469 15:38:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:32.469 15:38:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.469 15:38:27 thread -- common/autotest_common.sh@10 -- # set +x
00:05:32.469 ************************************
00:05:32.469 START TEST thread_poller_perf
00:05:32.469 ************************************
00:05:32.469 15:38:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:32.469 [2024-12-09 15:38:27.658562] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
00:05:32.469 [2024-12-09 15:38:27.658634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817911 ]
00:05:32.728 [2024-12-09 15:38:27.717157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.728 [2024-12-09 15:38:27.755356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.728 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:33.665 [2024-12-09T14:38:28.893Z] ======================================
00:05:33.665 [2024-12-09T14:38:28.893Z] busy:2101286122 (cyc)
00:05:33.665 [2024-12-09T14:38:28.893Z] total_run_count: 5244000
00:05:33.665 [2024-12-09T14:38:28.893Z] tsc_hz: 2100000000 (cyc)
00:05:33.665 [2024-12-09T14:38:28.893Z] ======================================
00:05:33.665 [2024-12-09T14:38:28.893Z] poller_cost: 400 (cyc), 190 (nsec)
00:05:33.665
00:05:33.665 real 0m1.155s
00:05:33.665 user 0m1.084s
00:05:33.665 sys 0m0.067s
00:05:33.665 15:38:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.665 15:38:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:33.665 ************************************
00:05:33.665 END TEST thread_poller_perf
00:05:33.665 ************************************
00:05:33.665 15:38:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:33.665
00:05:33.665 real 0m2.662s
00:05:33.665 user 0m2.342s
00:05:33.665 sys 0m0.333s
00:05:33.665 15:38:28 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.665 15:38:28 thread -- common/autotest_common.sh@10 -- # set +x
00:05:33.665 ************************************
00:05:33.665 END TEST thread
00:05:33.665 ************************************
00:05:33.665 15:38:28 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:33.665 15:38:28 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:33.665 15:38:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.665 15:38:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.665 15:38:28 -- common/autotest_common.sh@10 -- # set +x
00:05:33.925 ************************************
00:05:33.925 START TEST app_cmdline
00:05:33.925 ************************************
00:05:33.925 15:38:28 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:33.925 * Looking for test storage...
00:05:33.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:33.925 15:38:28 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:33.925 15:38:28 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:33.925 15:38:28 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:33.925 15:38:29 app_cmdline -- scripts/common.sh@368 -- # return 0
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:33.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.925 --rc genhtml_branch_coverage=1
00:05:33.925 --rc genhtml_function_coverage=1
00:05:33.925 --rc genhtml_legend=1
00:05:33.925 --rc geninfo_all_blocks=1
00:05:33.925 --rc geninfo_unexecuted_blocks=1
00:05:33.925
00:05:33.925 '
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:33.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.925 --rc genhtml_branch_coverage=1
00:05:33.925 --rc genhtml_function_coverage=1
00:05:33.925 --rc genhtml_legend=1
00:05:33.925 --rc geninfo_all_blocks=1
00:05:33.925 --rc geninfo_unexecuted_blocks=1
00:05:33.925
00:05:33.925 '
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:33.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.925 --rc genhtml_branch_coverage=1
00:05:33.925 --rc genhtml_function_coverage=1
00:05:33.925 --rc genhtml_legend=1
00:05:33.925 --rc geninfo_all_blocks=1
00:05:33.925 --rc geninfo_unexecuted_blocks=1
00:05:33.925
00:05:33.925 '
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:33.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.925 --rc genhtml_branch_coverage=1
00:05:33.925 --rc genhtml_function_coverage=1
00:05:33.925 --rc genhtml_legend=1
00:05:33.925 --rc geninfo_all_blocks=1
00:05:33.925 --rc geninfo_unexecuted_blocks=1
00:05:33.925
00:05:33.925 '
00:05:33.925 15:38:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:33.925 15:38:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1818233
00:05:33.925 15:38:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1818233
00:05:33.925 15:38:29 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1818233 ']'
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.925 15:38:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
15:38:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
15:38:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:33.925 [2024-12-09 15:38:29.132714] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
[2024-12-09 15:38:29.132762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818233 ]
00:05:34.184 [2024-12-09 15:38:29.205360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.184 [2024-12-09 15:38:29.243456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.443 15:38:29 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.443 15:38:29 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:34.443 {
00:05:34.443 "version": "SPDK v25.01-pre git sha1 b8248e28c",
00:05:34.443 "fields": {
00:05:34.443 "major": 25,
00:05:34.443 "minor": 1,
00:05:34.443 "patch": 0,
00:05:34.443 "suffix": "-pre",
00:05:34.443 "commit": "b8248e28c"
00:05:34.443 }
00:05:34.443 }
00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@24 --
# expected_methods+=("spdk_get_version") 00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:34.443 15:38:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:34.443 15:38:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.443 15:38:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.443 15:38:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.702 15:38:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:34.702 15:38:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:34.702 15:38:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.702 request: 00:05:34.702 { 00:05:34.702 "method": "env_dpdk_get_mem_stats", 00:05:34.702 "req_id": 1 00:05:34.702 } 00:05:34.702 Got JSON-RPC error response 00:05:34.702 response: 00:05:34.702 { 00:05:34.702 "code": -32601, 00:05:34.702 "message": "Method not found" 00:05:34.702 } 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.702 15:38:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1818233 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1818233 ']' 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1818233 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.702 15:38:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818233 00:05:34.961 15:38:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.961 15:38:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.961 15:38:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818233' 00:05:34.961 killing process with pid 1818233 00:05:34.961 
15:38:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 1818233 00:05:34.961 15:38:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 1818233 00:05:35.220 00:05:35.220 real 0m1.343s 00:05:35.220 user 0m1.573s 00:05:35.220 sys 0m0.437s 00:05:35.220 15:38:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.220 15:38:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.220 ************************************ 00:05:35.220 END TEST app_cmdline 00:05:35.220 ************************************ 00:05:35.220 15:38:30 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:35.220 15:38:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.220 15:38:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.220 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.220 ************************************ 00:05:35.220 START TEST version 00:05:35.220 ************************************ 00:05:35.220 15:38:30 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:35.220 * Looking for test storage... 
00:05:35.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:35.220 15:38:30 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.220 15:38:30 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.220 15:38:30 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.480 15:38:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.480 15:38:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.480 15:38:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.480 15:38:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.480 15:38:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.480 15:38:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.480 15:38:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.480 15:38:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.480 15:38:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.480 15:38:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.480 15:38:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.480 15:38:30 version -- scripts/common.sh@344 -- # case "$op" in 00:05:35.480 15:38:30 version -- scripts/common.sh@345 -- # : 1 00:05:35.480 15:38:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.480 15:38:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.480 15:38:30 version -- scripts/common.sh@365 -- # decimal 1 00:05:35.480 15:38:30 version -- scripts/common.sh@353 -- # local d=1 00:05:35.480 15:38:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.480 15:38:30 version -- scripts/common.sh@355 -- # echo 1 00:05:35.480 15:38:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.480 15:38:30 version -- scripts/common.sh@366 -- # decimal 2 00:05:35.480 15:38:30 version -- scripts/common.sh@353 -- # local d=2 00:05:35.480 15:38:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.480 15:38:30 version -- scripts/common.sh@355 -- # echo 2 00:05:35.480 15:38:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.480 15:38:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.480 15:38:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.480 15:38:30 version -- scripts/common.sh@368 -- # return 0 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.480 --rc genhtml_branch_coverage=1 00:05:35.480 --rc genhtml_function_coverage=1 00:05:35.480 --rc genhtml_legend=1 00:05:35.480 --rc geninfo_all_blocks=1 00:05:35.480 --rc geninfo_unexecuted_blocks=1 00:05:35.480 00:05:35.480 ' 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.480 --rc genhtml_branch_coverage=1 00:05:35.480 --rc genhtml_function_coverage=1 00:05:35.480 --rc genhtml_legend=1 00:05:35.480 --rc geninfo_all_blocks=1 00:05:35.480 --rc geninfo_unexecuted_blocks=1 00:05:35.480 00:05:35.480 ' 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.480 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.480 --rc genhtml_branch_coverage=1 00:05:35.480 --rc genhtml_function_coverage=1 00:05:35.480 --rc genhtml_legend=1 00:05:35.480 --rc geninfo_all_blocks=1 00:05:35.480 --rc geninfo_unexecuted_blocks=1 00:05:35.480 00:05:35.480 ' 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.480 --rc genhtml_branch_coverage=1 00:05:35.480 --rc genhtml_function_coverage=1 00:05:35.480 --rc genhtml_legend=1 00:05:35.480 --rc geninfo_all_blocks=1 00:05:35.480 --rc geninfo_unexecuted_blocks=1 00:05:35.480 00:05:35.480 ' 00:05:35.480 15:38:30 version -- app/version.sh@17 -- # get_header_version major 00:05:35.480 15:38:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # cut -f2 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.480 15:38:30 version -- app/version.sh@17 -- # major=25 00:05:35.480 15:38:30 version -- app/version.sh@18 -- # get_header_version minor 00:05:35.480 15:38:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # cut -f2 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.480 15:38:30 version -- app/version.sh@18 -- # minor=1 00:05:35.480 15:38:30 version -- app/version.sh@19 -- # get_header_version patch 00:05:35.480 15:38:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # cut -f2 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.480 
15:38:30 version -- app/version.sh@19 -- # patch=0 00:05:35.480 15:38:30 version -- app/version.sh@20 -- # get_header_version suffix 00:05:35.480 15:38:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # cut -f2 00:05:35.480 15:38:30 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.480 15:38:30 version -- app/version.sh@20 -- # suffix=-pre 00:05:35.480 15:38:30 version -- app/version.sh@22 -- # version=25.1 00:05:35.480 15:38:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:35.480 15:38:30 version -- app/version.sh@28 -- # version=25.1rc0 00:05:35.480 15:38:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:35.480 15:38:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:35.480 15:38:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:35.480 15:38:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:35.480 00:05:35.480 real 0m0.247s 00:05:35.480 user 0m0.152s 00:05:35.480 sys 0m0.138s 00:05:35.480 15:38:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.480 15:38:30 version -- common/autotest_common.sh@10 -- # set +x 00:05:35.480 ************************************ 00:05:35.480 END TEST version 00:05:35.480 ************************************ 00:05:35.480 15:38:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:35.480 15:38:30 -- spdk/autotest.sh@194 -- # uname -s 00:05:35.480 15:38:30 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:35.480 15:38:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.480 15:38:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.480 15:38:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:35.480 15:38:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.480 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.480 15:38:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:35.480 15:38:30 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:35.480 15:38:30 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:35.480 15:38:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:35.480 15:38:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.480 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.480 ************************************ 00:05:35.480 START TEST nvmf_tcp 00:05:35.480 ************************************ 00:05:35.480 15:38:30 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:35.740 * Looking for test storage... 
00:05:35.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:35.740 15:38:30 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.740 15:38:30 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.740 15:38:30 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.740 15:38:30 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.740 15:38:30 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.741 15:38:30 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.741 --rc genhtml_branch_coverage=1 00:05:35.741 --rc genhtml_function_coverage=1 00:05:35.741 --rc genhtml_legend=1 00:05:35.741 --rc geninfo_all_blocks=1 00:05:35.741 --rc geninfo_unexecuted_blocks=1 00:05:35.741 00:05:35.741 ' 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.741 --rc genhtml_branch_coverage=1 00:05:35.741 --rc genhtml_function_coverage=1 00:05:35.741 --rc genhtml_legend=1 00:05:35.741 --rc geninfo_all_blocks=1 00:05:35.741 --rc geninfo_unexecuted_blocks=1 00:05:35.741 00:05:35.741 ' 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.741 --rc genhtml_branch_coverage=1 00:05:35.741 --rc genhtml_function_coverage=1 00:05:35.741 --rc genhtml_legend=1 00:05:35.741 --rc geninfo_all_blocks=1 00:05:35.741 --rc geninfo_unexecuted_blocks=1 00:05:35.741 00:05:35.741 ' 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.741 --rc genhtml_branch_coverage=1 00:05:35.741 --rc genhtml_function_coverage=1 00:05:35.741 --rc genhtml_legend=1 00:05:35.741 --rc geninfo_all_blocks=1 00:05:35.741 --rc geninfo_unexecuted_blocks=1 00:05:35.741 00:05:35.741 ' 00:05:35.741 15:38:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:35.741 15:38:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:35.741 15:38:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.741 15:38:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.741 ************************************ 00:05:35.741 START TEST nvmf_target_core 00:05:35.741 ************************************ 00:05:35.741 15:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:35.741 * Looking for test storage... 
00:05:36.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:36.002 15:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.002 15:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.002 15:38:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.002 --rc genhtml_branch_coverage=1 00:05:36.002 --rc genhtml_function_coverage=1 00:05:36.002 --rc genhtml_legend=1 00:05:36.002 --rc geninfo_all_blocks=1 00:05:36.002 --rc geninfo_unexecuted_blocks=1 00:05:36.002 00:05:36.002 ' 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.002 --rc genhtml_branch_coverage=1 
00:05:36.002 --rc genhtml_function_coverage=1 00:05:36.002 --rc genhtml_legend=1 00:05:36.002 --rc geninfo_all_blocks=1 00:05:36.002 --rc geninfo_unexecuted_blocks=1 00:05:36.002 00:05:36.002 ' 00:05:36.002 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.002 --rc genhtml_branch_coverage=1 00:05:36.002 --rc genhtml_function_coverage=1 00:05:36.002 --rc genhtml_legend=1 00:05:36.002 --rc geninfo_all_blocks=1 00:05:36.003 --rc geninfo_unexecuted_blocks=1 00:05:36.003 00:05:36.003 ' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.003 --rc genhtml_branch_coverage=1 00:05:36.003 --rc genhtml_function_coverage=1 00:05:36.003 --rc genhtml_legend=1 00:05:36.003 --rc geninfo_all_blocks=1 00:05:36.003 --rc geninfo_unexecuted_blocks=1 00:05:36.003 00:05:36.003 ' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.003 ************************************ 00:05:36.003 START TEST nvmf_abort 00:05:36.003 ************************************ 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:36.003 * Looking for test storage... 
00:05:36.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.003 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.264 
15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.264 --rc genhtml_branch_coverage=1 00:05:36.264 --rc genhtml_function_coverage=1 00:05:36.264 --rc genhtml_legend=1 00:05:36.264 --rc geninfo_all_blocks=1 00:05:36.264 --rc 
geninfo_unexecuted_blocks=1 00:05:36.264 00:05:36.264 ' 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.264 --rc genhtml_branch_coverage=1 00:05:36.264 --rc genhtml_function_coverage=1 00:05:36.264 --rc genhtml_legend=1 00:05:36.264 --rc geninfo_all_blocks=1 00:05:36.264 --rc geninfo_unexecuted_blocks=1 00:05:36.264 00:05:36.264 ' 00:05:36.264 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.264 --rc genhtml_branch_coverage=1 00:05:36.264 --rc genhtml_function_coverage=1 00:05:36.264 --rc genhtml_legend=1 00:05:36.264 --rc geninfo_all_blocks=1 00:05:36.264 --rc geninfo_unexecuted_blocks=1 00:05:36.264 00:05:36.264 ' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.265 --rc genhtml_branch_coverage=1 00:05:36.265 --rc genhtml_function_coverage=1 00:05:36.265 --rc genhtml_legend=1 00:05:36.265 --rc geninfo_all_blocks=1 00:05:36.265 --rc geninfo_unexecuted_blocks=1 00:05:36.265 00:05:36.265 ' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.265 15:38:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.265 15:38:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:42.847 15:38:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:42.847 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.847 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:42.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:42.848 15:38:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:42.848 Found net devices under 0000:af:00.0: cvl_0_0 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:05:42.848 Found net devices under 0000:af:00.1: cvl_0_1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:42.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:42.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:05:42.848 00:05:42.848 --- 10.0.0.2 ping statistics --- 00:05:42.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.848 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:42.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:42.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:05:42.848 00:05:42.848 --- 10.0.0.1 ping statistics --- 00:05:42.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.848 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1821801 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1821801 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1821801 ']' 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.848 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.848 [2024-12-09 15:38:37.402592] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:05:42.848 [2024-12-09 15:38:37.402640] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.848 [2024-12-09 15:38:37.482629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.848 [2024-12-09 15:38:37.524298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.848 [2024-12-09 15:38:37.524334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.848 [2024-12-09 15:38:37.524342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.849 [2024-12-09 15:38:37.524348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.849 [2024-12-09 15:38:37.524353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:42.849 [2024-12-09 15:38:37.525751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.849 [2024-12-09 15:38:37.525859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.849 [2024-12-09 15:38:37.525860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 [2024-12-09 15:38:37.662666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 Malloc0 00:05:42.849 15:38:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 Delay0 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 [2024-12-09 15:38:37.745975] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.849 15:38:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:42.849 [2024-12-09 15:38:37.925363] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:45.374 Initializing NVMe Controllers 00:05:45.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:45.374 controller IO queue size 128 less than required 00:05:45.374 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:45.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:45.374 Initialization complete. Launching workers. 
00:05:45.374 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37592 00:05:45.374 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37653, failed to submit 62 00:05:45.374 success 37596, unsuccessful 57, failed 0 00:05:45.374 15:38:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:45.374 15:38:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.374 15:38:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:45.374 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:45.375 rmmod nvme_tcp 00:05:45.375 rmmod nvme_fabrics 00:05:45.375 rmmod nvme_keyring 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:45.375 15:38:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1821801 ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1821801 ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821801' 00:05:45.375 killing process with pid 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1821801 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.375 15:38:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:47.283 00:05:47.283 real 0m11.261s 00:05:47.283 user 0m11.776s 00:05:47.283 sys 0m5.474s 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 ************************************ 00:05:47.283 END TEST nvmf_abort 00:05:47.283 ************************************ 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.283 15:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 ************************************ 00:05:47.283 START TEST nvmf_ns_hotplug_stress 00:05:47.283 ************************************ 00:05:47.283 15:38:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:47.544 * Looking for test storage... 00:05:47.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.544 
15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.544 15:38:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.544 --rc genhtml_branch_coverage=1 00:05:47.544 --rc genhtml_function_coverage=1 00:05:47.544 --rc genhtml_legend=1 00:05:47.544 --rc geninfo_all_blocks=1 00:05:47.544 --rc geninfo_unexecuted_blocks=1 00:05:47.544 00:05:47.544 ' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.544 --rc genhtml_branch_coverage=1 00:05:47.544 --rc genhtml_function_coverage=1 00:05:47.544 --rc genhtml_legend=1 00:05:47.544 --rc geninfo_all_blocks=1 00:05:47.544 --rc geninfo_unexecuted_blocks=1 00:05:47.544 00:05:47.544 ' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.544 --rc genhtml_branch_coverage=1 00:05:47.544 --rc genhtml_function_coverage=1 00:05:47.544 --rc genhtml_legend=1 00:05:47.544 --rc geninfo_all_blocks=1 00:05:47.544 --rc geninfo_unexecuted_blocks=1 00:05:47.544 00:05:47.544 ' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.544 --rc genhtml_branch_coverage=1 00:05:47.544 --rc genhtml_function_coverage=1 00:05:47.544 --rc genhtml_legend=1 00:05:47.544 --rc geninfo_all_blocks=1 00:05:47.544 --rc geninfo_unexecuted_blocks=1 00:05:47.544 
00:05:47.544 ' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.544 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.545 15:38:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:54.118 15:38:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:54.118 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:05:54.119 Found 0000:af:00.0 (0x8086 - 0x159b) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:05:54.119 Found 0000:af:00.1 (0x8086 - 0x159b) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:54.119 15:38:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:05:54.119 Found net devices under 0000:af:00.0: cvl_0_0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.119 15:38:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:05:54.119 Found net devices under 0000:af:00.1: cvl_0_1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:54.119 15:38:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:54.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:54.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:05:54.119 00:05:54.119 --- 10.0.0.2 ping statistics --- 00:05:54.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.119 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:54.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:54.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:05:54.119 00:05:54.119 --- 10.0.0.1 ping statistics --- 00:05:54.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.119 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1825862 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1825862 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1825862 ']' 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.119 [2024-12-09 15:38:48.751121] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:05:54.119 [2024-12-09 15:38:48.751163] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.119 [2024-12-09 15:38:48.831976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.119 [2024-12-09 15:38:48.871956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.119 [2024-12-09 15:38:48.871990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.119 [2024-12-09 15:38:48.871997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.119 [2024-12-09 15:38:48.872003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.119 [2024-12-09 15:38:48.872008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:54.119 [2024-12-09 15:38:48.873320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.119 [2024-12-09 15:38:48.873424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.119 [2024-12-09 15:38:48.873425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.119 15:38:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.119 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.119 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:54.119 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:54.119 [2024-12-09 15:38:49.170732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.119 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:54.376 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.376 [2024-12-09 15:38:49.556071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.376 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:54.633 15:38:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:54.889 Malloc0 00:05:54.889 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:55.145 Delay0 00:05:55.145 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.401 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:55.402 NULL1 00:05:55.402 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:55.658 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1826126 00:05:55.658 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:55.658 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:05:55.658 15:38:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.027 Read completed with error (sct=0, sc=11) 00:05:57.027 15:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.027 15:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:57.027 15:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:57.283 true 00:05:57.283 15:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:05:57.283 15:38:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.213 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.213 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:58.213 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:58.469 true 00:05:58.469 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:05:58.469 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.726 15:38:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.982 15:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:58.982 15:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:59.239 true 00:05:59.239 15:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:05:59.239 15:38:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.169 15:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.426 15:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:00.426 15:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:00.682 true 00:06:00.682 15:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:00.682 15:38:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.612 15:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.612 15:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:01.612 15:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:01.868 true 00:06:01.868 
15:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:01.869 15:38:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.125 15:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.125 15:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:02.125 15:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:02.382 true 00:06:02.382 15:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:02.382 15:38:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.750 15:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.750 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:03.750 15:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:03.750 15:38:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:04.007 true 00:06:04.007 15:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:04.007 15:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.936 15:38:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.936 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:04.936 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:05.193 true 00:06:05.193 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:05.193 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.449 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.706 15:39:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:05.706 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:05.706 true 00:06:05.706 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:05.706 15:39:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 15:39:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.076 15:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:07.076 15:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:07.332 true 00:06:07.332 15:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:07.332 15:39:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.262 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.262 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:08.262 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:08.518 true 00:06:08.518 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:08.518 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.775 15:39:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.031 15:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:09.031 15:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:09.031 true 00:06:09.310 15:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:09.310 15:39:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.353 15:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.353 15:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:10.353 15:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:10.620 true 00:06:10.620 15:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:10.620 15:39:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.549 15:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.549 15:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:06:11.549 15:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:11.805 true 00:06:11.805 15:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:11.805 15:39:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.062 15:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.318 15:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:12.318 15:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:12.318 true 00:06:12.318 15:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:12.318 15:39:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.687 15:39:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.687 15:39:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:13.687 15:39:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:13.943 true 00:06:13.943 15:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:13.943 15:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.872 15:39:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.129 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:15.129 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:15.129 true 00:06:15.129 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:15.129 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.385 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.642 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:15.642 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:15.898 true 00:06:15.898 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:15.898 15:39:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.827 15:39:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.084 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:17.084 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:17.084 true 00:06:17.084 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:17.084 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.340 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.597 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:17.597 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:17.853 true 00:06:17.853 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:17.853 15:39:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.782 15:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.039 15:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:19.039 15:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:19.295 true 00:06:19.295 
15:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:19.295 15:39:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.223 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.223 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:20.223 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:20.479 true 00:06:20.480 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:20.480 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.736 15:39:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.993 15:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:20.993 15:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:21.249 true 00:06:21.249 15:39:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:21.249 15:39:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.178 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.435 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:22.435 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:22.435 true 00:06:22.691 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:22.691 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.691 15:39:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.948 15:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:22.948 15:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:23.204 true 00:06:23.204 15:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:23.204 15:39:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 15:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.572 15:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:24.572 15:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:24.829 true 00:06:24.829 15:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126 00:06:24.829 15:39:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.758 15:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.758 15:39:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:25.758 15:39:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:26.014 true
00:06:26.014 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126
00:06:26.014 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.014 Initializing NVMe Controllers
00:06:26.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:26.014 Controller IO queue size 128, less than required.
00:06:26.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.014 Controller IO queue size 128, less than required.
00:06:26.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:26.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:26.014 Initialization complete. Launching workers.
00:06:26.014 ========================================================
00:06:26.014 Latency(us)
00:06:26.014 Device Information : IOPS MiB/s Average min max
00:06:26.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1969.37 0.96 44792.33 2555.21 1023108.13
00:06:26.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18116.39 8.85 7065.63 1298.53 443517.84
00:06:26.014 ========================================================
00:06:26.014 Total : 20085.76 9.81 10764.66 1298.53 1023108.13
00:06:26.014
00:06:26.271 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:26.271 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:26.271 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:26.526 true
00:06:26.527 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1826126
00:06:26.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1826126) - No such process
00:06:26.527 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1826126
00:06:26.527 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.783 15:39:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:27.039
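The Total row in the latency summary above can be checked arithmetically: the total IOPS and MiB/s columns are sums of the per-namespace rows, and the total average latency is the IOPS-weighted mean of the per-namespace averages. A small sketch verifying this from the printed rows:

```python
# Per-namespace rows from the latency summary: (IOPS, MiB/s, average latency us)
ns1 = (1969.37, 0.96, 44792.33)
ns2 = (18116.39, 8.85, 7065.63)

total_iops = ns1[0] + ns2[0]   # column sum -> 20085.76
total_mibs = ns1[1] + ns2[1]   # column sum -> 9.81
# The "Total" average latency is the IOPS-weighted mean of the two rows.
total_avg = (ns1[0] * ns1[2] + ns2[0] * ns2[2]) / total_iops

print(round(total_iops, 2), round(total_mibs, 2), round(total_avg, 2))
```

The Total min/max columns are simply the min and max across the rows (1298.53 and 1023108.13), consistent with the table.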
15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:27.039 null0 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.039 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:27.295 null1 00:06:27.295 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.295 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.295 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:27.552 null2 00:06:27.552 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.552 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.552 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:27.809 null3 00:06:27.809 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.809 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.809 15:39:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:27.809 null4 00:06:27.809 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.809 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.809 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:28.066 null5 00:06:28.066 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.066 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.066 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:28.322 null6 00:06:28.322 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.322 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.322 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:28.579 null7 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.579 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1832172 1832173 1832175 1832177 1832179 1832181 1832183 1832186 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.580 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.837 15:39:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.837 15:39:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.837 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.838 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.838 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.094 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.350 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.351 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.606 15:39:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.606 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.607 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.863 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.864 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.864 15:39:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.864 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.120 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.121 15:39:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.121 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.377 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.632 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 
15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.633 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.889 15:39:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.889 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.889 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.889 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.146 15:39:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.146 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.403 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.659 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.916 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.916 15:39:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.916 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.172 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.438 15:39:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.438 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:32.697 15:39:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:32.697 rmmod nvme_tcp 00:06:32.697 rmmod nvme_fabrics 00:06:32.697 rmmod nvme_keyring 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1825862 ']' 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1825862 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1825862 ']' 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1825862 00:06:32.697 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825862 00:06:32.956 15:39:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825862' 00:06:32.956 killing process with pid 1825862 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1825862 00:06:32.956 15:39:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1825862 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:32.956 15:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:35.493 00:06:35.493 real 0m47.753s 00:06:35.493 user 3m13.605s 00:06:35.493 sys 0m15.507s 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.493 ************************************ 00:06:35.493 END TEST nvmf_ns_hotplug_stress 00:06:35.493 ************************************ 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:35.493 ************************************ 00:06:35.493 START TEST nvmf_delete_subsystem 00:06:35.493 ************************************ 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:35.493 * Looking for test storage... 
00:06:35.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:35.493 15:39:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.493 --rc genhtml_branch_coverage=1 00:06:35.493 --rc genhtml_function_coverage=1 00:06:35.493 --rc genhtml_legend=1 00:06:35.493 --rc geninfo_all_blocks=1 00:06:35.493 --rc geninfo_unexecuted_blocks=1 00:06:35.493 00:06:35.493 ' 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.493 --rc genhtml_branch_coverage=1 00:06:35.493 --rc genhtml_function_coverage=1 00:06:35.493 --rc genhtml_legend=1 00:06:35.493 --rc geninfo_all_blocks=1 00:06:35.493 --rc geninfo_unexecuted_blocks=1 00:06:35.493 00:06:35.493 ' 00:06:35.493 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.493 --rc genhtml_branch_coverage=1 00:06:35.493 --rc genhtml_function_coverage=1 00:06:35.494 --rc genhtml_legend=1 00:06:35.494 --rc geninfo_all_blocks=1 00:06:35.494 --rc geninfo_unexecuted_blocks=1 00:06:35.494 00:06:35.494 ' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.494 --rc genhtml_branch_coverage=1 00:06:35.494 --rc genhtml_function_coverage=1 00:06:35.494 --rc genhtml_legend=1 00:06:35.494 --rc geninfo_all_blocks=1 00:06:35.494 --rc geninfo_unexecuted_blocks=1 00:06:35.494 00:06:35.494 ' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.494 15:39:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:35.494 15:39:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.064 15:39:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:42.064 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:42.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:42.064 Found net devices under 0000:af:00.0: cvl_0_0 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:06:42.064 Found net devices under 0000:af:00.1: cvl_0_1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.064 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:42.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:06:42.065 00:06:42.065 --- 10.0.0.2 ping statistics --- 00:06:42.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.065 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:06:42.065 00:06:42.065 --- 10.0.0.1 ping statistics --- 00:06:42.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.065 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:42.065 15:39:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1836736 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1836736 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1836736 ']' 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 [2024-12-09 15:39:36.522461] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:06:42.065 [2024-12-09 15:39:36.522505] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.065 [2024-12-09 15:39:36.601144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.065 [2024-12-09 15:39:36.639006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.065 [2024-12-09 15:39:36.639040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.065 [2024-12-09 15:39:36.639047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.065 [2024-12-09 15:39:36.639053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.065 [2024-12-09 15:39:36.639058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:42.065 [2024-12-09 15:39:36.640213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.065 [2024-12-09 15:39:36.640214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 [2024-12-09 15:39:36.784453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 [2024-12-09 15:39:36.804679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 NULL1 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 Delay0 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1836763 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:42.065 15:39:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:42.065 [2024-12-09 15:39:36.915559] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:43.958 15:39:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:43.958 15:39:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.958 15:39:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error 
(sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Write completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.958 starting I/O failed: -6 00:06:43.958 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 [2024-12-09 15:39:39.032818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13502c0 is same with the state(6) to be set 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, 
sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write 
completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, 
sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 Write completed 
with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 starting I/O failed: -6 00:06:43.959 Read completed with error (sct=0, sc=8) 00:06:43.959 Write completed with error (sct=0, sc=8) 00:06:43.959 [2024-12-09 15:39:39.034856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a800d490 is same with the state(6) to be set 00:06:44.890 [2024-12-09 15:39:40.011063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13519b0 is same with the state(6) to be set 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 [2024-12-09 15:39:40.036179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1350960 is 
same with the state(6) to be set 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 [2024-12-09 15:39:40.037122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a8000c40 is same with the state(6) to be set 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with 
error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Write completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.890 [2024-12-09 15:39:40.037259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a800d7c0 is same with the state(6) to be set 00:06:44.890 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, 
sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Read completed with error (sct=0, sc=8) 00:06:44.891 Write completed with error (sct=0, sc=8) 00:06:44.891 [2024-12-09 15:39:40.037828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd0a800d020 is same with the state(6) to be set 00:06:44.891 Initializing NVMe Controllers 00:06:44.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:44.891 Controller IO queue size 128, less than required. 00:06:44.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:44.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:44.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:44.891 Initialization complete. Launching workers. 
00:06:44.891 ======================================================== 00:06:44.891 Latency(us) 00:06:44.891 Device Information : IOPS MiB/s Average min max 00:06:44.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.69 0.08 878507.92 251.40 1008190.34 00:06:44.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.65 0.08 1035048.89 676.83 2000850.06 00:06:44.891 ======================================================== 00:06:44.891 Total : 315.34 0.15 958259.86 251.40 2000850.06 00:06:44.891 00:06:44.891 [2024-12-09 15:39:40.038348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13519b0 (9): Bad file descriptor 00:06:44.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:44.891 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.891 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:44.891 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1836763 00:06:44.891 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1836763 00:06:45.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1836763) - No such process 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1836763 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:45.458 15:39:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1836763 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1836763 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.458 
15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.458 [2024-12-09 15:39:40.564274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1837443 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:45.458 15:39:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.458 [2024-12-09 15:39:40.647863] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:46.022 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.022 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:46.022 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.587 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.587 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:46.587 15:39:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.151 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.151 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:47.151 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.408 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.408 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:47.408 15:39:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.972 15:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.972 15:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:47.972 15:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.535 15:39:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.535 15:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:48.535 15:39:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.793 Initializing NVMe Controllers 00:06:48.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.793 Controller IO queue size 128, less than required. 00:06:48.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:48.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:48.793 Initialization complete. Launching workers. 00:06:48.793 ======================================================== 00:06:48.793 Latency(us) 00:06:48.793 Device Information : IOPS MiB/s Average min max 00:06:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002244.01 1000147.57 1008635.44 00:06:48.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003690.33 1000167.10 1010756.12 00:06:48.793 ======================================================== 00:06:48.793 Total : 256.00 0.12 1002967.17 1000147.57 1010756.12 00:06:48.793 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1837443 00:06:49.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1837443) - No such process 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 1837443 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:49.050 rmmod nvme_tcp 00:06:49.050 rmmod nvme_fabrics 00:06:49.050 rmmod nvme_keyring 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1836736 ']' 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1836736 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1836736 ']' 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1836736 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:49.050 15:39:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.050 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836736 00:06:49.051 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.051 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.051 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836736' 00:06:49.051 killing process with pid 1836736 00:06:49.051 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1836736 00:06:49.051 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1836736 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.309 15:39:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:51.844 00:06:51.844 real 0m16.174s 00:06:51.844 user 0m29.249s 00:06:51.844 sys 0m5.486s 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 ************************************ 00:06:51.844 END TEST nvmf_delete_subsystem 00:06:51.844 ************************************ 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 ************************************ 00:06:51.844 START TEST nvmf_host_management 00:06:51.844 ************************************ 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:51.844 * Looking for test storage... 
00:06:51.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:51.844 15:39:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.844 15:39:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.844 --rc genhtml_branch_coverage=1 00:06:51.844 --rc genhtml_function_coverage=1 00:06:51.844 --rc genhtml_legend=1 00:06:51.844 --rc geninfo_all_blocks=1 00:06:51.844 --rc geninfo_unexecuted_blocks=1 00:06:51.844 00:06:51.844 ' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.844 --rc genhtml_branch_coverage=1 00:06:51.844 --rc genhtml_function_coverage=1 00:06:51.844 --rc genhtml_legend=1 00:06:51.844 --rc geninfo_all_blocks=1 00:06:51.844 --rc geninfo_unexecuted_blocks=1 00:06:51.844 00:06:51.844 ' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.844 --rc genhtml_branch_coverage=1 00:06:51.844 --rc genhtml_function_coverage=1 00:06:51.844 --rc genhtml_legend=1 00:06:51.844 --rc geninfo_all_blocks=1 00:06:51.844 --rc geninfo_unexecuted_blocks=1 00:06:51.844 00:06:51.844 ' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.844 --rc genhtml_branch_coverage=1 00:06:51.844 --rc genhtml_function_coverage=1 00:06:51.844 --rc genhtml_legend=1 00:06:51.844 --rc geninfo_all_blocks=1 00:06:51.844 --rc geninfo_unexecuted_blocks=1 00:06:51.844 00:06:51.844 ' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.844 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:51.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:51.845 15:39:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.414 15:39:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.414 15:39:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:58.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:58.414 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.414 15:39:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:58.414 Found net devices under 0000:af:00.0: cvl_0_0 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:58.414 Found net devices under 0000:af:00.1: cvl_0_1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.414 15:39:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:06:58.414 00:06:58.414 --- 10.0.0.2 ping statistics --- 00:06:58.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.414 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:06:58.414 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:06:58.414 00:06:58.414 --- 10.0.0.1 ping statistics --- 00:06:58.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.415 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1841424 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1841424 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1841424 ']' 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.415 15:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 [2024-12-09 15:39:52.810283] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:58.415 [2024-12-09 15:39:52.810333] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.415 [2024-12-09 15:39:52.889878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.415 [2024-12-09 15:39:52.931088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.415 [2024-12-09 15:39:52.931126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.415 [2024-12-09 15:39:52.931133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.415 [2024-12-09 15:39:52.931139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.415 [2024-12-09 15:39:52.931143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:58.415 [2024-12-09 15:39:52.932688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.415 [2024-12-09 15:39:52.932798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.415 [2024-12-09 15:39:52.932905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.415 [2024-12-09 15:39:52.932907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 [2024-12-09 15:39:53.070336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:58.415 15:39:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 Malloc0 00:06:58.415 [2024-12-09 15:39:53.150493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1841679 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1841679 /var/tmp/bdevperf.sock 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1841679 ']' 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:58.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:58.415 { 00:06:58.415 "params": { 00:06:58.415 "name": "Nvme$subsystem", 00:06:58.415 "trtype": "$TEST_TRANSPORT", 00:06:58.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:58.415 "adrfam": "ipv4", 00:06:58.415 "trsvcid": "$NVMF_PORT", 00:06:58.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:58.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:58.415 "hdgst": ${hdgst:-false}, 
00:06:58.415 "ddgst": ${ddgst:-false} 00:06:58.415 }, 00:06:58.415 "method": "bdev_nvme_attach_controller" 00:06:58.415 } 00:06:58.415 EOF 00:06:58.415 )") 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:58.415 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:58.415 "params": { 00:06:58.415 "name": "Nvme0", 00:06:58.415 "trtype": "tcp", 00:06:58.415 "traddr": "10.0.0.2", 00:06:58.415 "adrfam": "ipv4", 00:06:58.415 "trsvcid": "4420", 00:06:58.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:58.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:58.415 "hdgst": false, 00:06:58.415 "ddgst": false 00:06:58.415 }, 00:06:58.415 "method": "bdev_nvme_attach_controller" 00:06:58.415 }' 00:06:58.415 [2024-12-09 15:39:53.246029] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:58.415 [2024-12-09 15:39:53.246075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841679 ] 00:06:58.415 [2024-12-09 15:39:53.318844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.415 [2024-12-09 15:39:53.358656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.682 Running I/O for 10 seconds... 
00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=98 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 98 -ge 100 ']' 00:06:58.682 15:39:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.973 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.973 [2024-12-09 15:39:54.081976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 
[2024-12-09 15:39:54.082242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.973 [2024-12-09 15:39:54.082393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.973 [2024-12-09 15:39:54.082401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:58.974 [2024-12-09 15:39:54.082575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 
15:39:54.082660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082742] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 
15:39:54.082912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:58.974 [2024-12-09 15:39:54.082971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.974 [2024-12-09 15:39:54.082978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19df550 is same with the state(6) to be set 00:06:58.974 [2024-12-09 15:39:54.083912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:58.974 task offset: 100864 on job bdev=Nvme0n1 fails 00:06:58.974 00:06:58.974 Latency(us) 00:06:58.974 
[2024-12-09T14:39:54.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.975 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:58.975 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:58.975 Verification LBA range: start 0x0 length 0x400 00:06:58.975 Nvme0n1 : 0.40 1911.83 119.49 159.32 0.00 30078.28 1599.39 26963.38 00:06:58.975 [2024-12-09T14:39:54.203Z] =================================================================================================================== 00:06:58.975 [2024-12-09T14:39:54.203Z] Total : 1911.83 119.49 159.32 0.00 30078.28 1599.39 26963.38 00:06:58.975 [2024-12-09 15:39:54.086340] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.975 [2024-12-09 15:39:54.086362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cbaa0 (9): Bad file descriptor 00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:58.975 [2024-12-09 15:39:54.093436] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:58.975 [2024-12-09 15:39:54.093525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:58.975 [2024-12-09 15:39:54.093548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:58.975 [2024-12-09 
15:39:54.093564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:58.975 [2024-12-09 15:39:54.093573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:58.975 [2024-12-09 15:39:54.093583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:58.975 [2024-12-09 15:39:54.093590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19cbaa0 00:06:58.975 [2024-12-09 15:39:54.093609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cbaa0 (9): Bad file descriptor 00:06:58.975 [2024-12-09 15:39:54.093620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:58.975 [2024-12-09 15:39:54.093627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:58.975 [2024-12-09 15:39:54.093635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:58.975 [2024-12-09 15:39:54.093643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.975 15:39:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1841679 00:06:59.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1841679) - No such process 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:59.974 { 00:06:59.974 "params": { 00:06:59.974 "name": "Nvme$subsystem", 00:06:59.974 "trtype": "$TEST_TRANSPORT", 00:06:59.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:59.974 "adrfam": "ipv4", 00:06:59.974 "trsvcid": "$NVMF_PORT", 00:06:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:59.974 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:59.974 "hdgst": ${hdgst:-false}, 00:06:59.974 "ddgst": ${ddgst:-false} 00:06:59.974 }, 00:06:59.974 "method": "bdev_nvme_attach_controller" 00:06:59.974 } 00:06:59.974 EOF 00:06:59.974 )") 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:59.974 15:39:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:59.974 "params": { 00:06:59.974 "name": "Nvme0", 00:06:59.974 "trtype": "tcp", 00:06:59.974 "traddr": "10.0.0.2", 00:06:59.974 "adrfam": "ipv4", 00:06:59.974 "trsvcid": "4420", 00:06:59.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:59.974 "hdgst": false, 00:06:59.974 "ddgst": false 00:06:59.974 }, 00:06:59.974 "method": "bdev_nvme_attach_controller" 00:06:59.974 }' 00:06:59.974 [2024-12-09 15:39:55.155192] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:06:59.974 [2024-12-09 15:39:55.155244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841937 ] 00:07:00.231 [2024-12-09 15:39:55.228122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.231 [2024-12-09 15:39:55.266302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.488 Running I/O for 1 seconds... 
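The `gen_nvmf_target_json` trace above assembles the bdevperf `--json` config by expanding a heredoc template once per subsystem and joining the fragments. A minimal stand-alone sketch of that pattern (the function name is illustrative and the 10.0.0.2/4420 values simply mirror the expanded output in the trace; this is not the actual `nvmf/common.sh` helper, which also pipes the result through `jq`):

```shell
# Hedged re-creation of the config-assembly pattern from the trace above:
# expand one heredoc fragment per subsystem index, then join with commas,
# the same shape that is fed to bdevperf via --json /dev/fd/62.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"    # comma-join, as the printf in the trace does
}
```

Calling `gen_target_json_sketch 0` reproduces the single-controller fragment visible in the expanded trace; passing several indices emits one `bdev_nvme_attach_controller` entry per subsystem.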
00:07:01.421 2007.00 IOPS, 125.44 MiB/s 00:07:01.421 Latency(us) 00:07:01.421 [2024-12-09T14:39:56.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:01.421 Verification LBA range: start 0x0 length 0x400 00:07:01.421 Nvme0n1 : 1.01 2053.25 128.33 0.00 0.00 30580.59 1755.43 26838.55 00:07:01.421 [2024-12-09T14:39:56.649Z] =================================================================================================================== 00:07:01.421 [2024-12-09T14:39:56.649Z] Total : 2053.25 128.33 0.00 0.00 30580.59 1755.43 26838.55 00:07:01.421 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:01.421 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:01.421 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:01.421 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:01.679 15:39:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:01.679 rmmod nvme_tcp 00:07:01.679 rmmod nvme_fabrics 00:07:01.679 rmmod nvme_keyring 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1841424 ']' 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1841424 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1841424 ']' 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1841424 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1841424 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1841424' 00:07:01.679 killing process with pid 1841424 00:07:01.679 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1841424 00:07:01.679 15:39:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1841424 00:07:01.938 [2024-12-09 15:39:56.926735] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:01.938 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:01.939 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.939 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.939 15:39:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:03.844 00:07:03.844 real 0m12.494s 00:07:03.844 user 0m20.018s 
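The `killing process with pid 1841424` sequence above goes through `killprocess`, which re-checks the pid's command name (`ps --no-headers -o comm=`) before signalling, so a stale pid or the `sudo` wrapper itself is never killed. A hedged, stand-alone sketch of that guard (the function name and return conventions here are illustrative, not the exact `autotest_common.sh` helper):

```shell
# Illustrative guard modeled on the killprocess trace above: confirm the pid
# is still alive and is not the sudo wrapper before delivering SIGTERM.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1      # process already gone
    name=$(ps --no-headers -o comm= "$pid")     # same probe as in the log
    if [ "$name" = sudo ]; then
        return 1                                # never kill our sudo parent
    fi
    kill "$pid"
}
```

The `kill -0` probe explains the `No such process` / `true` pair seen earlier in the log: signalling a pid that has already exited is treated as success rather than a test failure.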
00:07:03.844 sys 0m5.591s 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.844 ************************************ 00:07:03.844 END TEST nvmf_host_management 00:07:03.844 ************************************ 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.844 15:39:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.104 ************************************ 00:07:04.104 START TEST nvmf_lvol 00:07:04.104 ************************************ 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:04.104 * Looking for test storage... 
00:07:04.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.104 15:39:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.104 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.104 --rc genhtml_branch_coverage=1 00:07:04.104 --rc genhtml_function_coverage=1 00:07:04.104 --rc genhtml_legend=1 00:07:04.104 --rc geninfo_all_blocks=1 00:07:04.104 --rc geninfo_unexecuted_blocks=1 
00:07:04.104 00:07:04.104 ' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.105 --rc genhtml_branch_coverage=1 00:07:04.105 --rc genhtml_function_coverage=1 00:07:04.105 --rc genhtml_legend=1 00:07:04.105 --rc geninfo_all_blocks=1 00:07:04.105 --rc geninfo_unexecuted_blocks=1 00:07:04.105 00:07:04.105 ' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.105 --rc genhtml_branch_coverage=1 00:07:04.105 --rc genhtml_function_coverage=1 00:07:04.105 --rc genhtml_legend=1 00:07:04.105 --rc geninfo_all_blocks=1 00:07:04.105 --rc geninfo_unexecuted_blocks=1 00:07:04.105 00:07:04.105 ' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.105 --rc genhtml_branch_coverage=1 00:07:04.105 --rc genhtml_function_coverage=1 00:07:04.105 --rc genhtml_legend=1 00:07:04.105 --rc geninfo_all_blocks=1 00:07:04.105 --rc geninfo_unexecuted_blocks=1 00:07:04.105 00:07:04.105 ' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.105 15:39:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.105 15:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.674 15:40:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:10.674 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:10.674 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.674 
15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:10.674 Found net devices under 0000:af:00.0: cvl_0_0 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.674 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.675 15:40:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:10.675 Found net devices under 0000:af:00.1: cvl_0_1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:10.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:10.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:07:10.675 00:07:10.675 --- 10.0.0.2 ping statistics --- 00:07:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.675 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:07:10.675 00:07:10.675 --- 10.0.0.1 ping statistics --- 00:07:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.675 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1845676 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1845676 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1845676 ']' 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 [2024-12-09 15:40:05.341789] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:10.675 [2024-12-09 15:40:05.341838] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.675 [2024-12-09 15:40:05.419412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.675 [2024-12-09 15:40:05.459763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.675 [2024-12-09 15:40:05.459799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.675 [2024-12-09 15:40:05.459806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.675 [2024-12-09 15:40:05.459812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.675 [2024-12-09 15:40:05.459817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.675 [2024-12-09 15:40:05.461138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.675 [2024-12-09 15:40:05.461270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.675 [2024-12-09 15:40:05.461270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:10.675 [2024-12-09 15:40:05.766944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.675 15:40:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:10.933 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:10.933 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.190 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:11.190 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:11.448 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:11.448 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=771baa01-7391-4c25-9edb-44bdf25b5265 00:07:11.448 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 771baa01-7391-4c25-9edb-44bdf25b5265 lvol 20 00:07:11.705 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0716d0f0-2623-42fd-a8f1-670270d2727a 00:07:11.706 15:40:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.963 15:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0716d0f0-2623-42fd-a8f1-670270d2727a 00:07:12.220 15:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.220 [2024-12-09 15:40:07.427411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.477 15:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.477 15:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1846159 00:07:12.477 15:40:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:12.477 15:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:13.854 15:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0716d0f0-2623-42fd-a8f1-670270d2727a MY_SNAPSHOT 00:07:13.854 15:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=34ce67d5-cd87-4df8-a2f7-23c03b1eb2fd 00:07:13.854 15:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0716d0f0-2623-42fd-a8f1-670270d2727a 30 00:07:14.113 15:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 34ce67d5-cd87-4df8-a2f7-23c03b1eb2fd MY_CLONE 00:07:14.370 15:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=148b1142-6984-4e5b-a871-e5efcf9f3442 00:07:14.370 15:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 148b1142-6984-4e5b-a871-e5efcf9f3442 00:07:14.934 15:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1846159 00:07:23.033 Initializing NVMe Controllers 00:07:23.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:23.033 Controller IO queue size 128, less than required. 00:07:23.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:23.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:23.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:23.033 Initialization complete. Launching workers. 00:07:23.033 ======================================================== 00:07:23.033 Latency(us) 00:07:23.033 Device Information : IOPS MiB/s Average min max 00:07:23.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12483.30 48.76 10259.69 1501.12 95731.58 00:07:23.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12307.00 48.07 10405.21 3622.01 40408.79 00:07:23.033 ======================================================== 00:07:23.033 Total : 24790.30 96.84 10331.93 1501.12 95731.58 00:07:23.033 00:07:23.033 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.033 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0716d0f0-2623-42fd-a8f1-670270d2727a 00:07:23.290 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 771baa01-7391-4c25-9edb-44bdf25b5265 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.548 rmmod nvme_tcp 00:07:23.548 rmmod nvme_fabrics 00:07:23.548 rmmod nvme_keyring 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1845676 ']' 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1845676 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1845676 ']' 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1845676 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.548 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845676 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845676' 00:07:23.807 killing process with pid 1845676 00:07:23.807 15:40:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1845676 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1845676 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.807 15:40:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.345 00:07:26.345 real 0m21.955s 00:07:26.345 user 1m3.112s 00:07:26.345 sys 0m7.681s 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.345 ************************************ 00:07:26.345 END TEST 
nvmf_lvol 00:07:26.345 ************************************ 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.345 ************************************ 00:07:26.345 START TEST nvmf_lvs_grow 00:07:26.345 ************************************ 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.345 * Looking for test storage... 00:07:26.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.345 15:40:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.345 --rc genhtml_branch_coverage=1 00:07:26.345 --rc genhtml_function_coverage=1 00:07:26.345 --rc genhtml_legend=1 00:07:26.345 --rc geninfo_all_blocks=1 00:07:26.345 --rc geninfo_unexecuted_blocks=1 00:07:26.345 00:07:26.345 ' 
00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.345 --rc genhtml_branch_coverage=1 00:07:26.345 --rc genhtml_function_coverage=1 00:07:26.345 --rc genhtml_legend=1 00:07:26.345 --rc geninfo_all_blocks=1 00:07:26.345 --rc geninfo_unexecuted_blocks=1 00:07:26.345 00:07:26.345 ' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.345 --rc genhtml_branch_coverage=1 00:07:26.345 --rc genhtml_function_coverage=1 00:07:26.345 --rc genhtml_legend=1 00:07:26.345 --rc geninfo_all_blocks=1 00:07:26.345 --rc geninfo_unexecuted_blocks=1 00:07:26.345 00:07:26.345 ' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.345 --rc genhtml_branch_coverage=1 00:07:26.345 --rc genhtml_function_coverage=1 00:07:26.345 --rc genhtml_legend=1 00:07:26.345 --rc geninfo_all_blocks=1 00:07:26.345 --rc geninfo_unexecuted_blocks=1 00:07:26.345 00:07:26.345 ' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.345 15:40:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.345 
15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.345 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.346 15:40:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.346 
15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.346 15:40:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:32.916 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:32.916 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.916 
15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:32.916 Found net devices under 0000:af:00.0: cvl_0_0 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:32.916 Found net devices under 0000:af:00.1: cvl_0_1 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.916 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.917 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.917 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:32.917 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:32.917 15:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.917 15:40:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:07:32.917 00:07:32.917 --- 10.0.0.2 ping statistics --- 00:07:32.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.917 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:32.917 00:07:32.917 --- 10.0.0.1 ping statistics --- 00:07:32.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.917 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1851481 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1851481 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1851481 ']' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 [2024-12-09 15:40:27.349457] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:32.917 [2024-12-09 15:40:27.349506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.917 [2024-12-09 15:40:27.427076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.917 [2024-12-09 15:40:27.466468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.917 [2024-12-09 15:40:27.466503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.917 [2024-12-09 15:40:27.466511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.917 [2024-12-09 15:40:27.466517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.917 [2024-12-09 15:40:27.466522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:32.917 [2024-12-09 15:40:27.467047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.917 [2024-12-09 15:40:27.774678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 ************************************ 00:07:32.917 START TEST lvs_grow_clean 00:07:32.917 ************************************ 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.917 15:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.917 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:32.917 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.176 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:33.176 15:40:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:33.176 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 lvol 150 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ed9018e-ed5b-4f35-8709-fd056e622c28 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.434 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:33.693 [2024-12-09 15:40:28.796149] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:33.693 [2024-12-09 15:40:28.796196] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:33.693 true 00:07:33.693 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:33.693 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:33.951 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:33.951 15:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.210 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ed9018e-ed5b-4f35-8709-fd056e622c28 00:07:34.210 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.468 [2024-12-09 15:40:29.526324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.468 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1851970 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1851970 /var/tmp/bdevperf.sock 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1851970 ']' 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:34.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.727 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.727 [2024-12-09 15:40:29.767429] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:34.727 [2024-12-09 15:40:29.767473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851970 ] 00:07:34.727 [2024-12-09 15:40:29.842099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.727 [2024-12-09 15:40:29.880661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.984 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.984 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:34.985 15:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:35.242 Nvme0n1 00:07:35.242 15:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:35.242 [ 00:07:35.242 { 00:07:35.242 "name": "Nvme0n1", 00:07:35.242 "aliases": [ 00:07:35.242 "8ed9018e-ed5b-4f35-8709-fd056e622c28" 00:07:35.242 ], 00:07:35.242 "product_name": "NVMe disk", 00:07:35.242 "block_size": 4096, 00:07:35.242 "num_blocks": 38912, 00:07:35.242 "uuid": "8ed9018e-ed5b-4f35-8709-fd056e622c28", 00:07:35.242 "numa_id": 1, 00:07:35.242 "assigned_rate_limits": { 00:07:35.242 "rw_ios_per_sec": 0, 00:07:35.242 "rw_mbytes_per_sec": 0, 00:07:35.242 "r_mbytes_per_sec": 0, 00:07:35.242 "w_mbytes_per_sec": 0 00:07:35.242 }, 00:07:35.242 "claimed": false, 00:07:35.242 "zoned": false, 00:07:35.242 "supported_io_types": { 00:07:35.242 "read": true, 
00:07:35.242 "write": true, 00:07:35.242 "unmap": true, 00:07:35.242 "flush": true, 00:07:35.242 "reset": true, 00:07:35.242 "nvme_admin": true, 00:07:35.242 "nvme_io": true, 00:07:35.242 "nvme_io_md": false, 00:07:35.242 "write_zeroes": true, 00:07:35.242 "zcopy": false, 00:07:35.242 "get_zone_info": false, 00:07:35.242 "zone_management": false, 00:07:35.242 "zone_append": false, 00:07:35.242 "compare": true, 00:07:35.242 "compare_and_write": true, 00:07:35.242 "abort": true, 00:07:35.242 "seek_hole": false, 00:07:35.242 "seek_data": false, 00:07:35.242 "copy": true, 00:07:35.242 "nvme_iov_md": false 00:07:35.242 }, 00:07:35.242 "memory_domains": [ 00:07:35.242 { 00:07:35.242 "dma_device_id": "system", 00:07:35.242 "dma_device_type": 1 00:07:35.242 } 00:07:35.242 ], 00:07:35.242 "driver_specific": { 00:07:35.242 "nvme": [ 00:07:35.242 { 00:07:35.242 "trid": { 00:07:35.242 "trtype": "TCP", 00:07:35.242 "adrfam": "IPv4", 00:07:35.242 "traddr": "10.0.0.2", 00:07:35.242 "trsvcid": "4420", 00:07:35.242 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:35.242 }, 00:07:35.242 "ctrlr_data": { 00:07:35.242 "cntlid": 1, 00:07:35.242 "vendor_id": "0x8086", 00:07:35.242 "model_number": "SPDK bdev Controller", 00:07:35.242 "serial_number": "SPDK0", 00:07:35.242 "firmware_revision": "25.01", 00:07:35.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.242 "oacs": { 00:07:35.242 "security": 0, 00:07:35.242 "format": 0, 00:07:35.242 "firmware": 0, 00:07:35.242 "ns_manage": 0 00:07:35.242 }, 00:07:35.242 "multi_ctrlr": true, 00:07:35.242 "ana_reporting": false 00:07:35.242 }, 00:07:35.242 "vs": { 00:07:35.242 "nvme_version": "1.3" 00:07:35.242 }, 00:07:35.243 "ns_data": { 00:07:35.243 "id": 1, 00:07:35.243 "can_share": true 00:07:35.243 } 00:07:35.243 } 00:07:35.243 ], 00:07:35.243 "mp_policy": "active_passive" 00:07:35.243 } 00:07:35.243 } 00:07:35.243 ] 00:07:35.243 15:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1852119 00:07:35.243 15:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:35.243 15:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.501 Running I/O for 10 seconds... 00:07:36.440 Latency(us) 00:07:36.440 [2024-12-09T14:40:31.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.440 Nvme0n1 : 1.00 23436.00 91.55 0.00 0.00 0.00 0.00 0.00 00:07:36.440 [2024-12-09T14:40:31.668Z] =================================================================================================================== 00:07:36.440 [2024-12-09T14:40:31.668Z] Total : 23436.00 91.55 0.00 0.00 0.00 0.00 0.00 00:07:36.440 00:07:37.373 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:37.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.373 Nvme0n1 : 2.00 23598.50 92.18 0.00 0.00 0.00 0.00 0.00 00:07:37.373 [2024-12-09T14:40:32.601Z] =================================================================================================================== 00:07:37.373 [2024-12-09T14:40:32.601Z] Total : 23598.50 92.18 0.00 0.00 0.00 0.00 0.00 00:07:37.373 00:07:37.631 true 00:07:37.631 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:37.631 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:37.889 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:37.889 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:37.889 15:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1852119 00:07:38.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.455 Nvme0n1 : 3.00 23671.33 92.47 0.00 0.00 0.00 0.00 0.00 00:07:38.455 [2024-12-09T14:40:33.683Z] =================================================================================================================== 00:07:38.455 [2024-12-09T14:40:33.683Z] Total : 23671.33 92.47 0.00 0.00 0.00 0.00 0.00 00:07:38.455 00:07:39.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.388 Nvme0n1 : 4.00 23715.75 92.64 0.00 0.00 0.00 0.00 0.00 00:07:39.388 [2024-12-09T14:40:34.616Z] =================================================================================================================== 00:07:39.388 [2024-12-09T14:40:34.616Z] Total : 23715.75 92.64 0.00 0.00 0.00 0.00 0.00 00:07:39.388 00:07:40.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.761 Nvme0n1 : 5.00 23763.20 92.83 0.00 0.00 0.00 0.00 0.00 00:07:40.761 [2024-12-09T14:40:35.989Z] =================================================================================================================== 00:07:40.761 [2024-12-09T14:40:35.989Z] Total : 23763.20 92.83 0.00 0.00 0.00 0.00 0.00 00:07:40.761 00:07:41.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.695 Nvme0n1 : 6.00 23794.17 92.95 0.00 0.00 0.00 0.00 0.00 00:07:41.695 [2024-12-09T14:40:36.923Z] =================================================================================================================== 00:07:41.695 
[2024-12-09T14:40:36.923Z] Total : 23794.17 92.95 0.00 0.00 0.00 0.00 0.00 00:07:41.695 00:07:42.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.628 Nvme0n1 : 7.00 23816.71 93.03 0.00 0.00 0.00 0.00 0.00 00:07:42.628 [2024-12-09T14:40:37.857Z] =================================================================================================================== 00:07:42.629 [2024-12-09T14:40:37.857Z] Total : 23816.71 93.03 0.00 0.00 0.00 0.00 0.00 00:07:42.629 00:07:43.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.562 Nvme0n1 : 8.00 23833.62 93.10 0.00 0.00 0.00 0.00 0.00 00:07:43.562 [2024-12-09T14:40:38.790Z] =================================================================================================================== 00:07:43.562 [2024-12-09T14:40:38.790Z] Total : 23833.62 93.10 0.00 0.00 0.00 0.00 0.00 00:07:43.562 00:07:44.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.495 Nvme0n1 : 9.00 23850.22 93.16 0.00 0.00 0.00 0.00 0.00 00:07:44.495 [2024-12-09T14:40:39.723Z] =================================================================================================================== 00:07:44.495 [2024-12-09T14:40:39.723Z] Total : 23850.22 93.16 0.00 0.00 0.00 0.00 0.00 00:07:44.495 00:07:45.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.429 Nvme0n1 : 10.00 23822.00 93.05 0.00 0.00 0.00 0.00 0.00 00:07:45.429 [2024-12-09T14:40:40.657Z] =================================================================================================================== 00:07:45.429 [2024-12-09T14:40:40.657Z] Total : 23822.00 93.05 0.00 0.00 0.00 0.00 0.00 00:07:45.429 00:07:45.429 00:07:45.429 Latency(us) 00:07:45.429 [2024-12-09T14:40:40.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:45.429 Nvme0n1 : 10.01 23821.05 93.05 0.00 0.00 5370.47 1529.17 10173.68 00:07:45.429 [2024-12-09T14:40:40.657Z] =================================================================================================================== 00:07:45.429 [2024-12-09T14:40:40.657Z] Total : 23821.05 93.05 0.00 0.00 5370.47 1529.17 10173.68 00:07:45.429 { 00:07:45.429 "results": [ 00:07:45.429 { 00:07:45.429 "job": "Nvme0n1", 00:07:45.429 "core_mask": "0x2", 00:07:45.429 "workload": "randwrite", 00:07:45.429 "status": "finished", 00:07:45.429 "queue_depth": 128, 00:07:45.429 "io_size": 4096, 00:07:45.429 "runtime": 10.005773, 00:07:45.429 "iops": 23821.048108926716, 00:07:45.429 "mibps": 93.05096917549498, 00:07:45.429 "io_failed": 0, 00:07:45.429 "io_timeout": 0, 00:07:45.429 "avg_latency_us": 5370.471263994144, 00:07:45.429 "min_latency_us": 1529.1733333333334, 00:07:45.429 "max_latency_us": 10173.683809523809 00:07:45.429 } 00:07:45.429 ], 00:07:45.429 "core_count": 1 00:07:45.429 } 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1851970 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1851970 ']' 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1851970 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851970 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:45.429 15:40:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851970' 00:07:45.429 killing process with pid 1851970 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1851970 00:07:45.429 Received shutdown signal, test time was about 10.000000 seconds 00:07:45.429 00:07:45.429 Latency(us) 00:07:45.429 [2024-12-09T14:40:40.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.429 [2024-12-09T14:40:40.657Z] =================================================================================================================== 00:07:45.429 [2024-12-09T14:40:40.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:45.429 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1851970 00:07:45.687 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.946 15:40:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.204 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:46.204 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:46.204 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:46.204 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:46.204 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.463 [2024-12-09 15:40:41.584962] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.463 
15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:46.463 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:46.721 request: 00:07:46.721 { 00:07:46.721 "uuid": "817d91fa-a7b9-4b8a-a65b-b91bf8857749", 00:07:46.721 "method": "bdev_lvol_get_lvstores", 00:07:46.721 "req_id": 1 00:07:46.721 } 00:07:46.721 Got JSON-RPC error response 00:07:46.721 response: 00:07:46.721 { 00:07:46.721 "code": -19, 00:07:46.721 "message": "No such device" 00:07:46.721 } 00:07:46.721 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:46.721 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.721 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.722 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.722 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.980 aio_bdev 00:07:46.980 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ed9018e-ed5b-4f35-8709-fd056e622c28 00:07:46.980 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8ed9018e-ed5b-4f35-8709-fd056e622c28 00:07:46.980 15:40:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.980 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:46.980 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.980 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.980 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.980 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ed9018e-ed5b-4f35-8709-fd056e622c28 -t 2000 00:07:47.239 [ 00:07:47.239 { 00:07:47.239 "name": "8ed9018e-ed5b-4f35-8709-fd056e622c28", 00:07:47.239 "aliases": [ 00:07:47.239 "lvs/lvol" 00:07:47.239 ], 00:07:47.239 "product_name": "Logical Volume", 00:07:47.239 "block_size": 4096, 00:07:47.239 "num_blocks": 38912, 00:07:47.239 "uuid": "8ed9018e-ed5b-4f35-8709-fd056e622c28", 00:07:47.239 "assigned_rate_limits": { 00:07:47.239 "rw_ios_per_sec": 0, 00:07:47.239 "rw_mbytes_per_sec": 0, 00:07:47.239 "r_mbytes_per_sec": 0, 00:07:47.239 "w_mbytes_per_sec": 0 00:07:47.239 }, 00:07:47.239 "claimed": false, 00:07:47.239 "zoned": false, 00:07:47.239 "supported_io_types": { 00:07:47.239 "read": true, 00:07:47.239 "write": true, 00:07:47.239 "unmap": true, 00:07:47.239 "flush": false, 00:07:47.239 "reset": true, 00:07:47.239 
"nvme_admin": false, 00:07:47.239 "nvme_io": false, 00:07:47.239 "nvme_io_md": false, 00:07:47.239 "write_zeroes": true, 00:07:47.239 "zcopy": false, 00:07:47.239 "get_zone_info": false, 00:07:47.239 "zone_management": false, 00:07:47.239 "zone_append": false, 00:07:47.239 "compare": false, 00:07:47.239 "compare_and_write": false, 00:07:47.239 "abort": false, 00:07:47.239 "seek_hole": true, 00:07:47.239 "seek_data": true, 00:07:47.239 "copy": false, 00:07:47.239 "nvme_iov_md": false 00:07:47.239 }, 00:07:47.239 "driver_specific": { 00:07:47.239 "lvol": { 00:07:47.239 "lvol_store_uuid": "817d91fa-a7b9-4b8a-a65b-b91bf8857749", 00:07:47.239 "base_bdev": "aio_bdev", 00:07:47.239 "thin_provision": false, 00:07:47.239 "num_allocated_clusters": 38, 00:07:47.239 "snapshot": false, 00:07:47.239 "clone": false, 00:07:47.239 "esnap_clone": false 00:07:47.239 } 00:07:47.239 } 00:07:47.239 } 00:07:47.239 ] 00:07:47.239 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:47.239 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:47.239 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:47.498 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:47.498 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:47.498 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:47.756 15:40:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:47.756 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ed9018e-ed5b-4f35-8709-fd056e622c28 00:07:47.756 15:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 817d91fa-a7b9-4b8a-a65b-b91bf8857749 00:07:48.014 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.273 00:07:48.273 real 0m15.508s 00:07:48.273 user 0m15.092s 00:07:48.273 sys 0m1.445s 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.273 ************************************ 00:07:48.273 END TEST lvs_grow_clean 00:07:48.273 ************************************ 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.273 ************************************ 
00:07:48.273 START TEST lvs_grow_dirty 00:07:48.273 ************************************ 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.273 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.532 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:48.532 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:48.790 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:07:48.790 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:07:48.790 15:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.790 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.790 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.791 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 lvol 150 00:07:49.049 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b64a7488-792d-426d-ab4d-2330c80e9caf 00:07:49.049 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.049 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:49.307 [2024-12-09 15:40:44.373144] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:49.307 [2024-12-09 15:40:44.373196] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:49.307 true 00:07:49.307 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:07:49.307 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:49.566 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:49.566 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.566 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b64a7488-792d-426d-ab4d-2330c80e9caf 00:07:49.824 15:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:50.081 [2024-12-09 15:40:45.103315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1854555 00:07:50.081 15:40:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1854555 /var/tmp/bdevperf.sock 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1854555 ']' 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.081 15:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.337 [2024-12-09 15:40:45.339753] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:07:50.337 [2024-12-09 15:40:45.339800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854555 ] 00:07:50.337 [2024-12-09 15:40:45.415134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.337 [2024-12-09 15:40:45.455463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.269 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.269 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:51.269 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:51.527 Nvme0n1 00:07:51.527 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:51.527 [ 00:07:51.527 { 00:07:51.527 "name": "Nvme0n1", 00:07:51.527 "aliases": [ 00:07:51.527 "b64a7488-792d-426d-ab4d-2330c80e9caf" 00:07:51.527 ], 00:07:51.527 "product_name": "NVMe disk", 00:07:51.527 "block_size": 4096, 00:07:51.527 "num_blocks": 38912, 00:07:51.527 "uuid": "b64a7488-792d-426d-ab4d-2330c80e9caf", 00:07:51.527 "numa_id": 1, 00:07:51.527 "assigned_rate_limits": { 00:07:51.527 "rw_ios_per_sec": 0, 00:07:51.527 "rw_mbytes_per_sec": 0, 00:07:51.527 "r_mbytes_per_sec": 0, 00:07:51.527 "w_mbytes_per_sec": 0 00:07:51.527 }, 00:07:51.527 "claimed": false, 00:07:51.527 "zoned": false, 00:07:51.527 "supported_io_types": { 00:07:51.527 "read": true, 
00:07:51.527 "write": true, 00:07:51.527 "unmap": true, 00:07:51.527 "flush": true, 00:07:51.527 "reset": true, 00:07:51.527 "nvme_admin": true, 00:07:51.527 "nvme_io": true, 00:07:51.527 "nvme_io_md": false, 00:07:51.527 "write_zeroes": true, 00:07:51.527 "zcopy": false, 00:07:51.527 "get_zone_info": false, 00:07:51.527 "zone_management": false, 00:07:51.527 "zone_append": false, 00:07:51.527 "compare": true, 00:07:51.527 "compare_and_write": true, 00:07:51.527 "abort": true, 00:07:51.527 "seek_hole": false, 00:07:51.527 "seek_data": false, 00:07:51.527 "copy": true, 00:07:51.527 "nvme_iov_md": false 00:07:51.527 }, 00:07:51.527 "memory_domains": [ 00:07:51.527 { 00:07:51.527 "dma_device_id": "system", 00:07:51.527 "dma_device_type": 1 00:07:51.527 } 00:07:51.527 ], 00:07:51.527 "driver_specific": { 00:07:51.527 "nvme": [ 00:07:51.527 { 00:07:51.527 "trid": { 00:07:51.527 "trtype": "TCP", 00:07:51.527 "adrfam": "IPv4", 00:07:51.527 "traddr": "10.0.0.2", 00:07:51.527 "trsvcid": "4420", 00:07:51.527 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:51.527 }, 00:07:51.527 "ctrlr_data": { 00:07:51.527 "cntlid": 1, 00:07:51.527 "vendor_id": "0x8086", 00:07:51.527 "model_number": "SPDK bdev Controller", 00:07:51.527 "serial_number": "SPDK0", 00:07:51.527 "firmware_revision": "25.01", 00:07:51.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.528 "oacs": { 00:07:51.528 "security": 0, 00:07:51.528 "format": 0, 00:07:51.528 "firmware": 0, 00:07:51.528 "ns_manage": 0 00:07:51.528 }, 00:07:51.528 "multi_ctrlr": true, 00:07:51.528 "ana_reporting": false 00:07:51.528 }, 00:07:51.528 "vs": { 00:07:51.528 "nvme_version": "1.3" 00:07:51.528 }, 00:07:51.528 "ns_data": { 00:07:51.528 "id": 1, 00:07:51.528 "can_share": true 00:07:51.528 } 00:07:51.528 } 00:07:51.528 ], 00:07:51.528 "mp_policy": "active_passive" 00:07:51.528 } 00:07:51.528 } 00:07:51.528 ] 00:07:51.528 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1854793 00:07:51.528 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:51.528 15:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:51.786 Running I/O for 10 seconds... 00:07:52.720 Latency(us) 00:07:52.720 [2024-12-09T14:40:47.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.720 Nvme0n1 : 1.00 23566.00 92.05 0.00 0.00 0.00 0.00 0.00 00:07:52.720 [2024-12-09T14:40:47.948Z] =================================================================================================================== 00:07:52.720 [2024-12-09T14:40:47.948Z] Total : 23566.00 92.05 0.00 0.00 0.00 0.00 0.00 00:07:52.720 00:07:53.726 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:07:53.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.726 Nvme0n1 : 2.00 23635.00 92.32 0.00 0.00 0.00 0.00 0.00 00:07:53.726 [2024-12-09T14:40:48.954Z] =================================================================================================================== 00:07:53.726 [2024-12-09T14:40:48.954Z] Total : 23635.00 92.32 0.00 0.00 0.00 0.00 0.00 00:07:53.726 00:07:53.726 true 00:07:53.726 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:07:53.726 15:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:53.986 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:53.986 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:53.986 15:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1854793 00:07:54.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.919 Nvme0n1 : 3.00 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:07:54.919 [2024-12-09T14:40:50.147Z] =================================================================================================================== 00:07:54.920 [2024-12-09T14:40:50.148Z] Total : 23688.33 92.53 0.00 0.00 0.00 0.00 0.00 00:07:54.920 00:07:55.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.853 Nvme0n1 : 4.00 23727.75 92.69 0.00 0.00 0.00 0.00 0.00 00:07:55.853 [2024-12-09T14:40:51.081Z] =================================================================================================================== 00:07:55.853 [2024-12-09T14:40:51.081Z] Total : 23727.75 92.69 0.00 0.00 0.00 0.00 0.00 00:07:55.853 00:07:56.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.786 Nvme0n1 : 5.00 23784.60 92.91 0.00 0.00 0.00 0.00 0.00 00:07:56.786 [2024-12-09T14:40:52.014Z] =================================================================================================================== 00:07:56.786 [2024-12-09T14:40:52.014Z] Total : 23784.60 92.91 0.00 0.00 0.00 0.00 0.00 00:07:56.786 00:07:57.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.720 Nvme0n1 : 6.00 23821.50 93.05 0.00 0.00 0.00 0.00 0.00 00:07:57.720 [2024-12-09T14:40:52.948Z] =================================================================================================================== 00:07:57.720 
[2024-12-09T14:40:52.948Z] Total : 23821.50 93.05 0.00 0.00 0.00 0.00 0.00 00:07:57.720 00:07:58.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.654 Nvme0n1 : 7.00 23857.57 93.19 0.00 0.00 0.00 0.00 0.00 00:07:58.654 [2024-12-09T14:40:53.882Z] =================================================================================================================== 00:07:58.654 [2024-12-09T14:40:53.882Z] Total : 23857.57 93.19 0.00 0.00 0.00 0.00 0.00 00:07:58.654 00:08:00.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.028 Nvme0n1 : 8.00 23882.00 93.29 0.00 0.00 0.00 0.00 0.00 00:08:00.028 [2024-12-09T14:40:55.256Z] =================================================================================================================== 00:08:00.028 [2024-12-09T14:40:55.256Z] Total : 23882.00 93.29 0.00 0.00 0.00 0.00 0.00 00:08:00.028 00:08:00.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.961 Nvme0n1 : 9.00 23902.67 93.37 0.00 0.00 0.00 0.00 0.00 00:08:00.961 [2024-12-09T14:40:56.189Z] =================================================================================================================== 00:08:00.961 [2024-12-09T14:40:56.189Z] Total : 23902.67 93.37 0.00 0.00 0.00 0.00 0.00 00:08:00.961 00:08:01.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.896 Nvme0n1 : 10.00 23919.20 93.43 0.00 0.00 0.00 0.00 0.00 00:08:01.896 [2024-12-09T14:40:57.124Z] =================================================================================================================== 00:08:01.896 [2024-12-09T14:40:57.124Z] Total : 23919.20 93.43 0.00 0.00 0.00 0.00 0.00 00:08:01.896 00:08:01.896 00:08:01.896 Latency(us) 00:08:01.896 [2024-12-09T14:40:57.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:01.896 Nvme0n1 : 10.00 23920.60 93.44 0.00 0.00 5348.04 3198.78 10485.76 00:08:01.896 [2024-12-09T14:40:57.124Z] =================================================================================================================== 00:08:01.896 [2024-12-09T14:40:57.124Z] Total : 23920.60 93.44 0.00 0.00 5348.04 3198.78 10485.76 00:08:01.896 { 00:08:01.896 "results": [ 00:08:01.896 { 00:08:01.896 "job": "Nvme0n1", 00:08:01.896 "core_mask": "0x2", 00:08:01.896 "workload": "randwrite", 00:08:01.896 "status": "finished", 00:08:01.896 "queue_depth": 128, 00:08:01.896 "io_size": 4096, 00:08:01.896 "runtime": 10.004765, 00:08:01.896 "iops": 23920.60183322647, 00:08:01.896 "mibps": 93.43985091104089, 00:08:01.896 "io_failed": 0, 00:08:01.896 "io_timeout": 0, 00:08:01.896 "avg_latency_us": 5348.036420906855, 00:08:01.896 "min_latency_us": 3198.7809523809524, 00:08:01.896 "max_latency_us": 10485.76 00:08:01.896 } 00:08:01.896 ], 00:08:01.896 "core_count": 1 00:08:01.896 } 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1854555 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1854555 ']' 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1854555 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854555 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854555' 00:08:01.896 killing process with pid 1854555 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1854555 00:08:01.896 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.896 00:08:01.896 Latency(us) 00:08:01.896 [2024-12-09T14:40:57.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.896 [2024-12-09T14:40:57.124Z] =================================================================================================================== 00:08:01.896 [2024-12-09T14:40:57.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.896 15:40:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1854555 00:08:01.896 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.154 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.412 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:02.412 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:02.670 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:02.671 15:40:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1851481 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1851481 00:08:02.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1851481 Killed "${NVMF_APP[@]}" "$@" 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1856626 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1856626 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1856626 ']' 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.671 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.671 [2024-12-09 15:40:57.770859] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:02.671 [2024-12-09 15:40:57.770906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.671 [2024-12-09 15:40:57.849845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.671 [2024-12-09 15:40:57.888679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.671 [2024-12-09 15:40:57.888715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.671 [2024-12-09 15:40:57.888723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.671 [2024-12-09 15:40:57.888729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.671 [2024-12-09 15:40:57.888733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:02.671 [2024-12-09 15:40:57.889263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.929 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.929 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:02.929 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.929 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.929 15:40:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.929 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.929 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.188 [2024-12-09 15:40:58.195744] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:03.188 [2024-12-09 15:40:58.195824] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:03.188 [2024-12-09 15:40:58.195848] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b64a7488-792d-426d-ab4d-2330c80e9caf 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b64a7488-792d-426d-ab4d-2330c80e9caf 
00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:03.188 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b64a7488-792d-426d-ab4d-2330c80e9caf -t 2000 00:08:03.446 [ 00:08:03.446 { 00:08:03.446 "name": "b64a7488-792d-426d-ab4d-2330c80e9caf", 00:08:03.446 "aliases": [ 00:08:03.446 "lvs/lvol" 00:08:03.446 ], 00:08:03.446 "product_name": "Logical Volume", 00:08:03.446 "block_size": 4096, 00:08:03.446 "num_blocks": 38912, 00:08:03.446 "uuid": "b64a7488-792d-426d-ab4d-2330c80e9caf", 00:08:03.446 "assigned_rate_limits": { 00:08:03.446 "rw_ios_per_sec": 0, 00:08:03.446 "rw_mbytes_per_sec": 0, 00:08:03.446 "r_mbytes_per_sec": 0, 00:08:03.446 "w_mbytes_per_sec": 0 00:08:03.446 }, 00:08:03.446 "claimed": false, 00:08:03.446 "zoned": false, 00:08:03.446 "supported_io_types": { 00:08:03.446 "read": true, 00:08:03.446 "write": true, 00:08:03.446 "unmap": true, 00:08:03.446 "flush": false, 00:08:03.446 "reset": true, 00:08:03.446 "nvme_admin": false, 00:08:03.446 "nvme_io": false, 00:08:03.446 "nvme_io_md": false, 00:08:03.446 "write_zeroes": true, 00:08:03.446 "zcopy": false, 00:08:03.446 "get_zone_info": false, 00:08:03.446 "zone_management": false, 00:08:03.446 "zone_append": 
false, 00:08:03.446 "compare": false, 00:08:03.446 "compare_and_write": false, 00:08:03.446 "abort": false, 00:08:03.446 "seek_hole": true, 00:08:03.446 "seek_data": true, 00:08:03.446 "copy": false, 00:08:03.446 "nvme_iov_md": false 00:08:03.446 }, 00:08:03.446 "driver_specific": { 00:08:03.446 "lvol": { 00:08:03.446 "lvol_store_uuid": "e47d614b-63b7-49e4-b6ed-21374f9e75a8", 00:08:03.446 "base_bdev": "aio_bdev", 00:08:03.446 "thin_provision": false, 00:08:03.446 "num_allocated_clusters": 38, 00:08:03.446 "snapshot": false, 00:08:03.446 "clone": false, 00:08:03.446 "esnap_clone": false 00:08:03.446 } 00:08:03.446 } 00:08:03.446 } 00:08:03.446 ] 00:08:03.446 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:03.446 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:03.446 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:03.705 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:03.705 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:03.705 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:03.963 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:03.963 15:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:03.963 [2024-12-09 15:40:59.140539] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.963 15:40:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:03.963 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:04.222 request: 00:08:04.222 { 00:08:04.222 "uuid": "e47d614b-63b7-49e4-b6ed-21374f9e75a8", 00:08:04.222 "method": "bdev_lvol_get_lvstores", 00:08:04.222 "req_id": 1 00:08:04.222 } 00:08:04.222 Got JSON-RPC error response 00:08:04.222 response: 00:08:04.222 { 00:08:04.222 "code": -19, 00:08:04.222 "message": "No such device" 00:08:04.222 } 00:08:04.222 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:04.222 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.222 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.222 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.222 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.481 aio_bdev 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b64a7488-792d-426d-ab4d-2330c80e9caf 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b64a7488-792d-426d-ab4d-2330c80e9caf 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.481 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.739 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b64a7488-792d-426d-ab4d-2330c80e9caf -t 2000 00:08:04.739 [ 00:08:04.739 { 00:08:04.739 "name": "b64a7488-792d-426d-ab4d-2330c80e9caf", 00:08:04.739 "aliases": [ 00:08:04.739 "lvs/lvol" 00:08:04.739 ], 00:08:04.739 "product_name": "Logical Volume", 00:08:04.739 "block_size": 4096, 00:08:04.739 "num_blocks": 38912, 00:08:04.739 "uuid": "b64a7488-792d-426d-ab4d-2330c80e9caf", 00:08:04.739 "assigned_rate_limits": { 00:08:04.739 "rw_ios_per_sec": 0, 00:08:04.739 "rw_mbytes_per_sec": 0, 00:08:04.739 "r_mbytes_per_sec": 0, 00:08:04.739 "w_mbytes_per_sec": 0 00:08:04.739 }, 00:08:04.739 "claimed": false, 00:08:04.739 "zoned": false, 00:08:04.739 "supported_io_types": { 00:08:04.739 "read": true, 00:08:04.739 "write": true, 00:08:04.739 "unmap": true, 00:08:04.739 "flush": false, 00:08:04.739 "reset": true, 00:08:04.739 "nvme_admin": false, 00:08:04.739 "nvme_io": false, 00:08:04.739 "nvme_io_md": false, 00:08:04.739 "write_zeroes": true, 00:08:04.739 "zcopy": false, 00:08:04.739 "get_zone_info": false, 00:08:04.739 "zone_management": false, 00:08:04.739 "zone_append": false, 00:08:04.739 "compare": false, 00:08:04.739 "compare_and_write": false, 
00:08:04.739 "abort": false, 00:08:04.739 "seek_hole": true, 00:08:04.739 "seek_data": true, 00:08:04.739 "copy": false, 00:08:04.739 "nvme_iov_md": false 00:08:04.739 }, 00:08:04.739 "driver_specific": { 00:08:04.739 "lvol": { 00:08:04.739 "lvol_store_uuid": "e47d614b-63b7-49e4-b6ed-21374f9e75a8", 00:08:04.739 "base_bdev": "aio_bdev", 00:08:04.739 "thin_provision": false, 00:08:04.739 "num_allocated_clusters": 38, 00:08:04.739 "snapshot": false, 00:08:04.739 "clone": false, 00:08:04.739 "esnap_clone": false 00:08:04.739 } 00:08:04.739 } 00:08:04.739 } 00:08:04.739 ] 00:08:04.739 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:04.739 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:04.739 15:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:04.998 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:04.998 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:04.998 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:05.256 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:05.256 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b64a7488-792d-426d-ab4d-2330c80e9caf 00:08:05.515 15:41:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e47d614b-63b7-49e4-b6ed-21374f9e75a8 00:08:05.515 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.773 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:05.773 00:08:05.773 real 0m17.577s 00:08:05.773 user 0m45.062s 00:08:05.773 sys 0m3.790s 00:08:05.773 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.773 15:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.773 ************************************ 00:08:05.773 END TEST lvs_grow_dirty 00:08:05.773 ************************************ 00:08:06.032 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:06.032 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:06.032 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:06.032 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:06.032 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:06.033 nvmf_trace.0 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.033 rmmod nvme_tcp 00:08:06.033 rmmod nvme_fabrics 00:08:06.033 rmmod nvme_keyring 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1856626 ']' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1856626 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1856626 ']' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1856626 
00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856626 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856626' 00:08:06.033 killing process with pid 1856626 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1856626 00:08:06.033 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1856626 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.292 15:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.198 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.198 00:08:08.198 real 0m42.282s 00:08:08.198 user 1m5.858s 00:08:08.198 sys 0m10.049s 00:08:08.198 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.198 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.198 ************************************ 00:08:08.198 END TEST nvmf_lvs_grow 00:08:08.198 ************************************ 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.458 ************************************ 00:08:08.458 START TEST nvmf_bdev_io_wait 00:08:08.458 ************************************ 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:08.458 * Looking for test storage... 
00:08:08.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.458 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.458 --rc genhtml_branch_coverage=1 00:08:08.458 --rc genhtml_function_coverage=1 00:08:08.458 --rc genhtml_legend=1 00:08:08.458 --rc geninfo_all_blocks=1 00:08:08.458 --rc geninfo_unexecuted_blocks=1 00:08:08.458 00:08:08.458 ' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.458 --rc genhtml_branch_coverage=1 00:08:08.458 --rc genhtml_function_coverage=1 00:08:08.458 --rc genhtml_legend=1 00:08:08.458 --rc geninfo_all_blocks=1 00:08:08.458 --rc geninfo_unexecuted_blocks=1 00:08:08.458 00:08:08.458 ' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.458 --rc genhtml_branch_coverage=1 00:08:08.458 --rc genhtml_function_coverage=1 00:08:08.458 --rc genhtml_legend=1 00:08:08.458 --rc geninfo_all_blocks=1 00:08:08.458 --rc geninfo_unexecuted_blocks=1 00:08:08.458 00:08:08.458 ' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.458 --rc genhtml_branch_coverage=1 00:08:08.458 --rc genhtml_function_coverage=1 00:08:08.458 --rc genhtml_legend=1 00:08:08.458 --rc geninfo_all_blocks=1 00:08:08.458 --rc geninfo_unexecuted_blocks=1 00:08:08.458 00:08:08.458 ' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.458 15:41:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.458 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.459 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.718 15:41:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.287 15:41:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:15.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.287 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:15.288 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.288 15:41:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:15.288 Found net devices under 0000:af:00.0: cvl_0_0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.288 
15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:15.288 Found net devices under 0000:af:00.1: cvl_0_1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.288 15:41:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:08:15.288 00:08:15.288 --- 10.0.0.2 ping statistics --- 00:08:15.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.288 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:08:15.288 00:08:15.288 --- 10.0.0.1 ping statistics --- 00:08:15.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.288 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1860851 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1860851 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1860851 ']' 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.288 [2024-12-09 15:41:09.710710] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:15.288 [2024-12-09 15:41:09.710759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.288 [2024-12-09 15:41:09.790058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.288 [2024-12-09 15:41:09.831720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.288 [2024-12-09 15:41:09.831756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:15.288 [2024-12-09 15:41:09.831763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.288 [2024-12-09 15:41:09.831769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.288 [2024-12-09 15:41:09.831774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.288 [2024-12-09 15:41:09.833261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.288 [2024-12-09 15:41:09.833374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.288 [2024-12-09 15:41:09.833479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.288 [2024-12-09 15:41:09.833480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.288 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 15:41:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 [2024-12-09 15:41:09.964479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 Malloc0 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.289 15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 
15:41:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.289 [2024-12-09 15:41:10.019654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1860884 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1860886 
00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.289 { 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme$subsystem", 00:08:15.289 "trtype": "$TEST_TRANSPORT", 00:08:15.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "$NVMF_PORT", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.289 "hdgst": ${hdgst:-false}, 00:08:15.289 "ddgst": ${ddgst:-false} 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.289 } 00:08:15.289 EOF 00:08:15.289 )") 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1860888 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.289 15:41:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.289 { 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme$subsystem", 00:08:15.289 "trtype": "$TEST_TRANSPORT", 00:08:15.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "$NVMF_PORT", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.289 "hdgst": ${hdgst:-false}, 00:08:15.289 "ddgst": ${ddgst:-false} 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.289 } 00:08:15.289 EOF 00:08:15.289 )") 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1860891 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.289 { 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme$subsystem", 00:08:15.289 "trtype": "$TEST_TRANSPORT", 00:08:15.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "$NVMF_PORT", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.289 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:15.289 "hdgst": ${hdgst:-false}, 00:08:15.289 "ddgst": ${ddgst:-false} 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.289 } 00:08:15.289 EOF 00:08:15.289 )") 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.289 { 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme$subsystem", 00:08:15.289 "trtype": "$TEST_TRANSPORT", 00:08:15.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "$NVMF_PORT", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.289 "hdgst": ${hdgst:-false}, 00:08:15.289 "ddgst": ${ddgst:-false} 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.289 } 00:08:15.289 EOF 00:08:15.289 )") 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1860884 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme1", 00:08:15.289 "trtype": "tcp", 00:08:15.289 "traddr": "10.0.0.2", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "4420", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.289 "hdgst": false, 00:08:15.289 "ddgst": false 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.289 }' 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.289 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.289 "params": { 00:08:15.289 "name": "Nvme1", 00:08:15.289 "trtype": "tcp", 00:08:15.289 "traddr": "10.0.0.2", 00:08:15.289 "adrfam": "ipv4", 00:08:15.289 "trsvcid": "4420", 00:08:15.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.289 "hdgst": false, 00:08:15.289 "ddgst": false 00:08:15.289 }, 00:08:15.289 "method": "bdev_nvme_attach_controller" 00:08:15.290 }' 00:08:15.290 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.290 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.290 "params": { 00:08:15.290 "name": "Nvme1", 00:08:15.290 "trtype": "tcp", 00:08:15.290 "traddr": "10.0.0.2", 00:08:15.290 "adrfam": "ipv4", 00:08:15.290 "trsvcid": "4420", 00:08:15.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.290 "hdgst": false, 00:08:15.290 "ddgst": false 00:08:15.290 }, 00:08:15.290 "method": "bdev_nvme_attach_controller" 00:08:15.290 }' 00:08:15.290 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.290 15:41:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.290 "params": { 00:08:15.290 "name": "Nvme1", 00:08:15.290 "trtype": "tcp", 00:08:15.290 "traddr": "10.0.0.2", 00:08:15.290 "adrfam": "ipv4", 00:08:15.290 "trsvcid": "4420", 00:08:15.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.290 "hdgst": false, 00:08:15.290 "ddgst": false 00:08:15.290 }, 00:08:15.290 "method": "bdev_nvme_attach_controller" 00:08:15.290 }' 00:08:15.290 [2024-12-09 15:41:10.066033] Starting SPDK v25.01-pre git sha1 
b8248e28c / DPDK 24.03.0 initialization... 00:08:15.290 [2024-12-09 15:41:10.066087] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:15.290 [2024-12-09 15:41:10.072064] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:15.290 [2024-12-09 15:41:10.072105] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:15.290 [2024-12-09 15:41:10.074077] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:15.290 [2024-12-09 15:41:10.074117] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:15.290 [2024-12-09 15:41:10.074258] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:15.290 [2024-12-09 15:41:10.074296] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:15.290 [2024-12-09 15:41:10.241705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.290 [2024-12-09 15:41:10.288241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:15.290 [2024-12-09 15:41:10.339558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.290 [2024-12-09 15:41:10.384124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.290 [2024-12-09 15:41:10.437737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.290 [2024-12-09 15:41:10.482197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:15.547 [2024-12-09 15:41:10.538827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.547 [2024-12-09 15:41:10.589007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.547 Running I/O for 1 seconds... 00:08:15.547 Running I/O for 1 seconds... 00:08:15.547 Running I/O for 1 seconds... 00:08:15.547 Running I/O for 1 seconds... 
00:08:16.479 7792.00 IOPS, 30.44 MiB/s 00:08:16.479 Latency(us) 00:08:16.479 [2024-12-09T14:41:11.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.479 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:16.479 Nvme1n1 : 1.02 7773.71 30.37 0.00 0.00 16281.86 6428.77 26339.23 00:08:16.479 [2024-12-09T14:41:11.707Z] =================================================================================================================== 00:08:16.479 [2024-12-09T14:41:11.707Z] Total : 7773.71 30.37 0.00 0.00 16281.86 6428.77 26339.23 00:08:16.479 12161.00 IOPS, 47.50 MiB/s 00:08:16.479 Latency(us) 00:08:16.479 [2024-12-09T14:41:11.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.479 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:16.479 Nvme1n1 : 1.01 12219.43 47.73 0.00 0.00 10440.33 4868.39 19598.38 00:08:16.479 [2024-12-09T14:41:11.707Z] =================================================================================================================== 00:08:16.479 [2024-12-09T14:41:11.707Z] Total : 12219.43 47.73 0.00 0.00 10440.33 4868.39 19598.38 00:08:16.737 7699.00 IOPS, 30.07 MiB/s 00:08:16.737 Latency(us) 00:08:16.737 [2024-12-09T14:41:11.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.737 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:16.737 Nvme1n1 : 1.01 7795.13 30.45 0.00 0.00 16379.36 3651.29 36450.50 00:08:16.737 [2024-12-09T14:41:11.965Z] =================================================================================================================== 00:08:16.737 [2024-12-09T14:41:11.965Z] Total : 7795.13 30.45 0.00 0.00 16379.36 3651.29 36450.50 00:08:16.737 243176.00 IOPS, 949.91 MiB/s 00:08:16.737 Latency(us) 00:08:16.737 [2024-12-09T14:41:11.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.737 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:08:16.737 Nvme1n1 : 1.00 242796.25 948.42 0.00 0.00 524.83 226.26 1529.17 00:08:16.737 [2024-12-09T14:41:11.965Z] =================================================================================================================== 00:08:16.737 [2024-12-09T14:41:11.965Z] Total : 242796.25 948.42 0.00 0.00 524.83 226.26 1529.17 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1860886 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1860888 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1860891 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.737 
15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.737 rmmod nvme_tcp 00:08:16.737 rmmod nvme_fabrics 00:08:16.737 rmmod nvme_keyring 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:16.737 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:16.738 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1860851 ']' 00:08:16.738 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1860851 00:08:16.738 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1860851 ']' 00:08:16.738 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1860851 00:08:16.738 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:16.997 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.997 15:41:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860851 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860851' 00:08:16.997 killing process with pid 1860851 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1860851 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 1860851 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.997 15:41:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.532 00:08:19.532 real 0m10.762s 00:08:19.532 user 0m16.049s 00:08:19.532 sys 0m6.188s 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.532 ************************************ 00:08:19.532 END TEST nvmf_bdev_io_wait 
00:08:19.532 ************************************ 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.532 ************************************ 00:08:19.532 START TEST nvmf_queue_depth 00:08:19.532 ************************************ 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.532 * Looking for test storage... 00:08:19.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.532 15:41:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:19.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.532 --rc genhtml_branch_coverage=1 00:08:19.532 --rc genhtml_function_coverage=1 00:08:19.532 --rc genhtml_legend=1 00:08:19.532 --rc geninfo_all_blocks=1 00:08:19.532 --rc 
geninfo_unexecuted_blocks=1 00:08:19.532 00:08:19.532 ' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:19.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.532 --rc genhtml_branch_coverage=1 00:08:19.532 --rc genhtml_function_coverage=1 00:08:19.532 --rc genhtml_legend=1 00:08:19.532 --rc geninfo_all_blocks=1 00:08:19.532 --rc geninfo_unexecuted_blocks=1 00:08:19.532 00:08:19.532 ' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:19.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.532 --rc genhtml_branch_coverage=1 00:08:19.532 --rc genhtml_function_coverage=1 00:08:19.532 --rc genhtml_legend=1 00:08:19.532 --rc geninfo_all_blocks=1 00:08:19.532 --rc geninfo_unexecuted_blocks=1 00:08:19.532 00:08:19.532 ' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:19.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.532 --rc genhtml_branch_coverage=1 00:08:19.532 --rc genhtml_function_coverage=1 00:08:19.532 --rc genhtml_legend=1 00:08:19.532 --rc geninfo_all_blocks=1 00:08:19.532 --rc geninfo_unexecuted_blocks=1 00:08:19.532 00:08:19.532 ' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.532 15:41:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.532 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.533 15:41:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.533 15:41:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.533 15:41:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.103 15:41:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:26.103 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:26.103 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.103 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:26.104 Found net devices under 0000:af:00.0: cvl_0_0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:26.104 Found net devices under 0000:af:00.1: cvl_0_1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.104 
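The device scan above matches each PCI vendor/device pair against the supported-NIC tables built in `gather_supported_nvmf_pci_devs` (the `e810`, `x722`, and `mlx` arrays), which is how `0x8086 - 0x159b` is recognized as an E810-family port bound to the `ice` driver. A minimal Python sketch of that classification — the IDs are copied from the log, but the `classify` helper and `FAMILIES` table are illustrative names, not part of nvmf/common.sh:

```python
# Vendor IDs and per-family device IDs as recorded in the log above.
INTEL, MELLANOX = "0x8086", "0x15b3"

FAMILIES = {
    "e810": {(INTEL, "0x1592"), (INTEL, "0x159b")},
    "x722": {(INTEL, "0x37d2")},
    "mlx": {(MELLANOX, d) for d in (
        "0xa2dc", "0x1021", "0xa2d6", "0x101d", "0x101b",
        "0x1017", "0x1019", "0x1015", "0x1013")},
}

def classify(vendor: str, device: str):
    """Return the NIC family for a PCI vendor/device pair, or None."""
    for family, ids in FAMILIES.items():
        if (vendor, device) in ids:
            return family
    return None

print(classify("0x8086", "0x159b"))  # the two ports found in this run
```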
15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:08:26.104 00:08:26.104 --- 10.0.0.2 ping statistics --- 00:08:26.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.104 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:26.104 00:08:26.104 --- 10.0.0.1 ping statistics --- 00:08:26.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.104 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1864854 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1864854 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1864854 ']' 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 [2024-12-09 15:41:20.550679] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:26.104 [2024-12-09 15:41:20.550723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.104 [2024-12-09 15:41:20.631272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.104 [2024-12-09 15:41:20.670547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.104 [2024-12-09 15:41:20.670581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:26.104 [2024-12-09 15:41:20.670588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.104 [2024-12-09 15:41:20.670594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.104 [2024-12-09 15:41:20.670599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.104 [2024-12-09 15:41:20.671117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 [2024-12-09 15:41:20.810501] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 Malloc0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.104 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.105 [2024-12-09 15:41:20.860573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.105 15:41:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1864874 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1864874 /var/tmp/bdevperf.sock 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1864874 ']' 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.105 15:41:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.105 [2024-12-09 15:41:20.913020] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:08:26.105 [2024-12-09 15:41:20.913061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864874 ] 00:08:26.105 [2024-12-09 15:41:20.968622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.105 [2024-12-09 15:41:21.007758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.105 NVMe0n1 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.105 15:41:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:26.105 Running I/O for 10 seconds... 
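bdevperf reports both IOPS and MiB/s columns in the summary that follows; the two are consistent given the 4 KiB IO size set on the command line (`-o 4096`). A quick sanity check of that arithmetic, using the figures from this run:

```python
# Figures from this run's bdevperf summary: 4 KiB IOs at ~12508 IOPS.
io_size = 4096      # bytes per IO (-o 4096)
iops = 12508.07     # reported IOPS for NVMe0n1

mibps = iops * io_size / 2**20   # bytes/s converted to MiB/s
print(round(mibps, 2))           # matches the reported 48.86 MiB/s
```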
00:08:28.471 11789.00 IOPS, 46.05 MiB/s [2024-12-09T14:41:24.281Z] 12072.50 IOPS, 47.16 MiB/s [2024-12-09T14:41:25.655Z] 12258.67 IOPS, 47.89 MiB/s [2024-12-09T14:41:26.586Z] 12280.00 IOPS, 47.97 MiB/s [2024-12-09T14:41:27.520Z] 12313.80 IOPS, 48.10 MiB/s [2024-12-09T14:41:28.461Z] 12373.83 IOPS, 48.34 MiB/s [2024-12-09T14:41:29.395Z] 12422.86 IOPS, 48.53 MiB/s [2024-12-09T14:41:30.329Z] 12454.50 IOPS, 48.65 MiB/s [2024-12-09T14:41:31.702Z] 12487.78 IOPS, 48.78 MiB/s [2024-12-09T14:41:31.702Z] 12475.40 IOPS, 48.73 MiB/s 00:08:36.474 Latency(us) 00:08:36.474 [2024-12-09T14:41:31.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.474 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:36.474 Verification LBA range: start 0x0 length 0x4000 00:08:36.474 NVMe0n1 : 10.05 12508.07 48.86 0.00 0.00 81615.85 14417.92 52678.46 00:08:36.474 [2024-12-09T14:41:31.702Z] =================================================================================================================== 00:08:36.474 [2024-12-09T14:41:31.702Z] Total : 12508.07 48.86 0.00 0.00 81615.85 14417.92 52678.46 00:08:36.474 { 00:08:36.474 "results": [ 00:08:36.474 { 00:08:36.474 "job": "NVMe0n1", 00:08:36.474 "core_mask": "0x1", 00:08:36.474 "workload": "verify", 00:08:36.474 "status": "finished", 00:08:36.474 "verify_range": { 00:08:36.474 "start": 0, 00:08:36.474 "length": 16384 00:08:36.474 }, 00:08:36.474 "queue_depth": 1024, 00:08:36.474 "io_size": 4096, 00:08:36.474 "runtime": 10.052953, 00:08:36.474 "iops": 12508.066037909459, 00:08:36.474 "mibps": 48.85963296058382, 00:08:36.474 "io_failed": 0, 00:08:36.474 "io_timeout": 0, 00:08:36.474 "avg_latency_us": 81615.84689097149, 00:08:36.474 "min_latency_us": 14417.92, 00:08:36.474 "max_latency_us": 52678.460952380956 00:08:36.474 } 00:08:36.474 ], 00:08:36.474 "core_count": 1 00:08:36.474 } 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1864874 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1864874 ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1864874 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864874 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864874' 00:08:36.474 killing process with pid 1864874 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1864874 00:08:36.474 Received shutdown signal, test time was about 10.000000 seconds 00:08:36.474 00:08:36.474 Latency(us) 00:08:36.474 [2024-12-09T14:41:31.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.474 [2024-12-09T14:41:31.702Z] =================================================================================================================== 00:08:36.474 [2024-12-09T14:41:31.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1864874 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.474 rmmod nvme_tcp 00:08:36.474 rmmod nvme_fabrics 00:08:36.474 rmmod nvme_keyring 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1864854 ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1864854 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1864854 ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1864854 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864854 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864854' 00:08:36.474 killing process with pid 1864854 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1864854 00:08:36.474 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1864854 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:36.733 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.734 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.734 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.734 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.734 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.734 15:41:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.269 15:41:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:39.270
00:08:39.270 real 0m19.630s
00:08:39.270 user 0m22.838s
00:08:39.270 sys 0m6.105s
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:39.270 ************************************
00:08:39.270 END TEST nvmf_queue_depth ************************************
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:39.270 15:41:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:39.270 ************************************
00:08:39.270 START TEST nvmf_target_multipath ************************************
00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:39.270 * Looking for test storage...
00:08:39.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:39.270 15:41:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.270 --rc genhtml_branch_coverage=1 00:08:39.270 --rc genhtml_function_coverage=1 00:08:39.270 --rc genhtml_legend=1 00:08:39.270 --rc geninfo_all_blocks=1 00:08:39.270 --rc geninfo_unexecuted_blocks=1 00:08:39.270 00:08:39.270 ' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.270 --rc genhtml_branch_coverage=1 00:08:39.270 --rc genhtml_function_coverage=1 00:08:39.270 --rc genhtml_legend=1 00:08:39.270 --rc geninfo_all_blocks=1 00:08:39.270 --rc geninfo_unexecuted_blocks=1 00:08:39.270 00:08:39.270 ' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.270 --rc genhtml_branch_coverage=1 00:08:39.270 --rc genhtml_function_coverage=1 00:08:39.270 --rc genhtml_legend=1 00:08:39.270 --rc geninfo_all_blocks=1 00:08:39.270 --rc geninfo_unexecuted_blocks=1 00:08:39.270 00:08:39.270 ' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.270 --rc genhtml_branch_coverage=1 00:08:39.270 --rc genhtml_function_coverage=1 00:08:39.270 --rc genhtml_legend=1 00:08:39.270 --rc geninfo_all_blocks=1 00:08:39.270 --rc geninfo_unexecuted_blocks=1 00:08:39.270 00:08:39.270 ' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.270 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.271 15:41:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:45.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:45.842 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:45.842 Found net devices under 0000:af:00.0: cvl_0_0 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.842 15:41:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:45.842 Found net devices under 0000:af:00.1: cvl_0_1 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.842 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.843 15:41:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:45.843 00:08:45.843 --- 10.0.0.2 ping statistics --- 00:08:45.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.843 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:45.843 00:08:45.843 --- 10.0.0.1 ping statistics --- 00:08:45.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.843 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:45.843 only one NIC for nvmf test 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:45.843 15:41:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.843 rmmod nvme_tcp 00:08:45.843 rmmod nvme_fabrics 00:08:45.843 rmmod nvme_keyring 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.843 15:41:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.221 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.480 00:08:47.480 real 0m8.455s 00:08:47.480 user 0m1.766s 00:08:47.480 sys 0m4.620s 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.480 ************************************ 00:08:47.480 END TEST nvmf_target_multipath 00:08:47.480 ************************************ 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.480 15:41:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.480 ************************************ 00:08:47.480 START TEST nvmf_zcopy 00:08:47.480 ************************************ 00:08:47.481 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.481 * Looking for test storage... 00:08:47.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.481 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.481 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.481 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.740 15:41:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc 
genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.740 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.741 15:41:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.741 15:41:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.741 15:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.308 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.308 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.309 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.309 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.309 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.309 15:41:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:08:54.309 00:08:54.309 --- 10.0.0.2 ping statistics --- 00:08:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.309 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:08:54.309 00:08:54.309 --- 10.0.0.1 ping statistics --- 00:08:54.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.309 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1873683 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1873683 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1873683 ']' 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.309 15:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.309 [2024-12-09 15:41:48.804809] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:54.310 [2024-12-09 15:41:48.804857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.310 [2024-12-09 15:41:48.882882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.310 [2024-12-09 15:41:48.922487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.310 [2024-12-09 15:41:48.922519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:54.310 [2024-12-09 15:41:48.922526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.310 [2024-12-09 15:41:48.922531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.310 [2024-12-09 15:41:48.922540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.310 [2024-12-09 15:41:48.923063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 [2024-12-09 15:41:49.066968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 [2024-12-09 15:41:49.087133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 malloc0 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.310 { 00:08:54.310 "params": { 00:08:54.310 "name": "Nvme$subsystem", 00:08:54.310 "trtype": "$TEST_TRANSPORT", 00:08:54.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.310 "adrfam": "ipv4", 00:08:54.310 "trsvcid": "$NVMF_PORT", 00:08:54.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.310 "hdgst": ${hdgst:-false}, 00:08:54.310 "ddgst": ${ddgst:-false} 00:08:54.310 }, 00:08:54.310 "method": "bdev_nvme_attach_controller" 00:08:54.310 } 00:08:54.310 EOF 00:08:54.310 )") 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:54.310 15:41:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.310 "params": { 00:08:54.310 "name": "Nvme1", 00:08:54.310 "trtype": "tcp", 00:08:54.310 "traddr": "10.0.0.2", 00:08:54.310 "adrfam": "ipv4", 00:08:54.310 "trsvcid": "4420", 00:08:54.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.310 "hdgst": false, 00:08:54.310 "ddgst": false 00:08:54.310 }, 00:08:54.310 "method": "bdev_nvme_attach_controller" 00:08:54.310 }' 00:08:54.310 [2024-12-09 15:41:49.174244] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:08:54.310 [2024-12-09 15:41:49.174288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873712 ] 00:08:54.310 [2024-12-09 15:41:49.248881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.310 [2024-12-09 15:41:49.288146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.310 Running I/O for 10 seconds... 
00:08:56.617 8668.00 IOPS, 67.72 MiB/s [2024-12-09T14:41:52.779Z] 8740.00 IOPS, 68.28 MiB/s [2024-12-09T14:41:53.712Z] 8786.00 IOPS, 68.64 MiB/s [2024-12-09T14:41:54.645Z] 8804.50 IOPS, 68.79 MiB/s [2024-12-09T14:41:55.578Z] 8803.00 IOPS, 68.77 MiB/s [2024-12-09T14:41:56.512Z] 8815.67 IOPS, 68.87 MiB/s [2024-12-09T14:41:57.884Z] 8817.14 IOPS, 68.88 MiB/s [2024-12-09T14:41:58.818Z] 8818.75 IOPS, 68.90 MiB/s [2024-12-09T14:41:59.752Z] 8824.33 IOPS, 68.94 MiB/s [2024-12-09T14:41:59.752Z] 8829.00 IOPS, 68.98 MiB/s 00:09:04.524 Latency(us) 00:09:04.524 [2024-12-09T14:41:59.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.524 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:04.524 Verification LBA range: start 0x0 length 0x1000 00:09:04.524 Nvme1n1 : 10.02 8827.93 68.97 0.00 0.00 14452.99 2543.42 23842.62 00:09:04.524 [2024-12-09T14:41:59.752Z] =================================================================================================================== 00:09:04.524 [2024-12-09T14:41:59.752Z] Total : 8827.93 68.97 0.00 0.00 14452.99 2543.42 23842.62 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1875518 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:04.524 15:41:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:04.524 { 00:09:04.524 "params": { 00:09:04.524 "name": "Nvme$subsystem", 00:09:04.524 "trtype": "$TEST_TRANSPORT", 00:09:04.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.524 "adrfam": "ipv4", 00:09:04.524 "trsvcid": "$NVMF_PORT", 00:09:04.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.524 "hdgst": ${hdgst:-false}, 00:09:04.524 "ddgst": ${ddgst:-false} 00:09:04.524 }, 00:09:04.524 "method": "bdev_nvme_attach_controller" 00:09:04.524 } 00:09:04.524 EOF 00:09:04.524 )") 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:04.524 [2024-12-09 15:41:59.649747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.649786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:04.524 15:41:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.524 "params": { 00:09:04.524 "name": "Nvme1", 00:09:04.524 "trtype": "tcp", 00:09:04.524 "traddr": "10.0.0.2", 00:09:04.524 "adrfam": "ipv4", 00:09:04.524 "trsvcid": "4420", 00:09:04.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.524 "hdgst": false, 00:09:04.524 "ddgst": false 00:09:04.524 }, 00:09:04.524 "method": "bdev_nvme_attach_controller" 00:09:04.524 }' 00:09:04.524 [2024-12-09 15:41:59.661747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.661761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.673775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.673787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.685807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.685817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.689169] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:09:04.524 [2024-12-09 15:41:59.689213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875518 ] 00:09:04.524 [2024-12-09 15:41:59.697838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.697849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.709870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.709880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.721902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.721913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.733937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.733947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.524 [2024-12-09 15:41:59.745967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.524 [2024-12-09 15:41:59.745977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.782 [2024-12-09 15:41:59.758017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.782 [2024-12-09 15:41:59.758038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.782 [2024-12-09 15:41:59.764023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.782 [2024-12-09 15:41:59.770037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:04.782 [2024-12-09 15:41:59.770050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.782 [2024-12-09 15:41:59.782068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.782 [2024-12-09 15:41:59.782083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.794096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.794113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.803929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.783 [2024-12-09 15:41:59.806130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.806144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.818173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.818192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.830200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.830223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.842239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.842258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.854276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.854290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.866298] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.866315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.878322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.878335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.890587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.890605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.902616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.902634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.914649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.914665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.926679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.926692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.938711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.938721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.950739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.950749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.962779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.962793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.974810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.974824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.986840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.986850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.783 [2024-12-09 15:41:59.998870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.783 [2024-12-09 15:41:59.998880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.010925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 [2024-12-09 15:42:00.010946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.022950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 [2024-12-09 15:42:00.022968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.034975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 [2024-12-09 15:42:00.034986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.047011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 [2024-12-09 15:42:00.047022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.059043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 
[2024-12-09 15:42:00.059058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 [2024-12-09 15:42:00.071079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.041 [2024-12-09 15:42:00.071091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.041 Running I/O for 5 seconds... 
[... repeated error pairs omitted: the same "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" messages recur roughly every 11-16 ms from 15:42:00.083 through 15:42:02.356 ...]
00:09:06.075 16840.00 IOPS, 131.56 MiB/s [2024-12-09T14:42:01.303Z]
00:09:07.108 16920.00 IOPS, 132.19 MiB/s [2024-12-09T14:42:02.336Z]
[2024-12-09 15:42:02.370731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 
[2024-12-09 15:42:02.370751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.384491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.384510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.397921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.397942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.411895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.411916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.425885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.425905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.439674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.439694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.453285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.453304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.467199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.467224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.481147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.481167] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.492406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.492425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.506743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.506762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.520463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.520483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.534570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.534590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.548336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.548356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.562144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.562163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.576196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.576222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.366 [2024-12-09 15:42:02.590224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.366 [2024-12-09 15:42:02.590244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:07.624 [2024-12-09 15:42:02.604518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.604540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.618628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.618648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.632388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.632408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.646457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.646477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.660261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.660281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.674092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.674112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.687549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.687569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.701398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.701419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.715287] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.715307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.729132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.729152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.742724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.742744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.756705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.756724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.770229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.770249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.783883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.783902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.798010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.798029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.811962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.811982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.624 [2024-12-09 15:42:02.825909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:07.624 [2024-12-09 15:42:02.825929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.625 [2024-12-09 15:42:02.839625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.625 [2024-12-09 15:42:02.839645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.853524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.853545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.867596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.867618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.881796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.881815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.896065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.896084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.906740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.906759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.921243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.921263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.934811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 
[2024-12-09 15:42:02.934831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.948618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.948637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.962682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.962701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.976472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.976492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:02.990445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:02.990465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.003806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.003826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.017975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.017995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.031751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.031772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.045969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.045993] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.060168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.060189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.070626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.070647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.085136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.085156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.883 [2024-12-09 15:42:03.098583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.883 [2024-12-09 15:42:03.098604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.141 [2024-12-09 15:42:03.112709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.141 [2024-12-09 15:42:03.112731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.141 [2024-12-09 15:42:03.126560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.126583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 16922.67 IOPS, 132.21 MiB/s [2024-12-09T14:42:03.370Z] [2024-12-09 15:42:03.140126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.140146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.153885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.153905] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.167794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.167815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.181682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.181703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.195388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.195409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.209106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.209126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.222798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.222819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.236750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.236771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.250733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.250753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.264981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.265001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:08.142 [2024-12-09 15:42:03.279244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.279265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.289994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.290014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.303917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.303937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.317754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.317774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.331159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.331178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.345171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.345192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.142 [2024-12-09 15:42:03.359153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.142 [2024-12-09 15:42:03.359180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.373089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.373111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.387012] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.387033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.401053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.401073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.412142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.412162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.426391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.426410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.440238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.440274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.453806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.453826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.467911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.400 [2024-12-09 15:42:03.467931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.400 [2024-12-09 15:42:03.481312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.481332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.495071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.495090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.508595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.508614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.522150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.522170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.536227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.536246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.550515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.550535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.566296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.566315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.579980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.579999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.594031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.594051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.608006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 
[2024-12-09 15:42:03.608025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.401 [2024-12-09 15:42:03.621755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.401 [2024-12-09 15:42:03.621779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.635416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.635438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.649429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.649450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.660593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.660612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.675036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.675055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.688863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.688884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.703115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.703134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.718424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.718443] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.732401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.732420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.746234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.746253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.759921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.759941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.773536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.773554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.787275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.787295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.801303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.801323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.815262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.815281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.659 [2024-12-09 15:42:03.829106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.659 [2024-12-09 15:42:03.829126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:09:08.659 [2024-12-09 15:42:03.842544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.659 [2024-12-09 15:42:03.842563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130: "Requested NSID 1 already in use" / nvmf_rpc.c:1520: "Unable to add namespace") repeats at roughly 14 ms intervals from 15:42:03.856 through 15:42:04.131 (elapsed 00:09:08.659-00:09:08.917) ...]
00:09:08.917 16935.50 IOPS, 132.31 MiB/s [2024-12-09T14:42:04.145Z]
[... the same error pair continues at roughly 14 ms intervals from 15:42:04.145 through 15:42:05.125 (elapsed 00:09:08.917-00:09:09.969) ...]
00:09:09.969 16937.60 IOPS, 132.32 MiB/s [2024-12-09T14:42:05.197Z]
[2024-12-09 15:42:05.139062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.969 [2024-12-09 15:42:05.139082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.969
00:09:09.969 Latency(us)
00:09:09.969 [2024-12-09T14:42:05.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.969 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:09.969 Nvme1n1 : 5.01 16940.86 132.35 0.00 0.00 7548.86 3495.25 17476.27
00:09:09.969 [2024-12-09T14:42:05.197Z] ===================================================================================================================
00:09:09.969 [2024-12-09T14:42:05.197Z] Total : 16940.86 132.35 0.00 0.00 7548.86 3495.25 17476.27
00:09:09.969 [2024-12-09 15:42:05.148953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.969 [2024-12-09 15:42:05.148971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at roughly 12 ms intervals from 15:42:05.160 through 15:42:05.305 (elapsed 00:09:09.969-00:09:10.280) ...]
00:09:10.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1875518) - No such process
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1875518
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:10.281 delay0
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.281 15:42:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-12-09 15:42:05.453909] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:16.873 Initializing NVMe Controllers
00:09:16.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:16.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:16.873 Initialization complete. Launching workers.
00:09:16.873 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 8811
00:09:16.873 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9023, failed to submit 80
00:09:16.873 success 8892, unsuccessful 131, failed 0
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:16.873 rmmod nvme_tcp
00:09:16.873 rmmod nvme_fabrics
00:09:16.873 rmmod nvme_keyring
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1873683 ']'
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1873683
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1873683 ']'
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1873683
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873683
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873683'
00:09:16.873 killing process with pid 1873683
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1873683
00:09:16.873 15:42:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1873683
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:16.873 15:42:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:19.410
00:09:19.410 real 0m31.553s
00:09:19.410 user 0m41.880s
00:09:19.410 sys 0m11.437s
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:19.410 ************************************
00:09:19.410 END TEST nvmf_zcopy
00:09:19.410 ************************************
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:19.410 ************************************
00:09:19.410 START TEST nvmf_nmic
00:09:19.410 ************************************
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:19.410 * Looking for test storage...
00:09:19.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:19.410 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:19.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.410 --rc genhtml_branch_coverage=1
00:09:19.410 --rc genhtml_function_coverage=1
00:09:19.410 --rc genhtml_legend=1
00:09:19.410 --rc geninfo_all_blocks=1
00:09:19.410 --rc geninfo_unexecuted_blocks=1
00:09:19.410 00:09:19.410 ' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.411 --rc genhtml_branch_coverage=1 00:09:19.411 --rc genhtml_function_coverage=1 00:09:19.411 --rc genhtml_legend=1 00:09:19.411 --rc geninfo_all_blocks=1 00:09:19.411 --rc geninfo_unexecuted_blocks=1 00:09:19.411 00:09:19.411 ' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.411 --rc genhtml_branch_coverage=1 00:09:19.411 --rc genhtml_function_coverage=1 00:09:19.411 --rc genhtml_legend=1 00:09:19.411 --rc geninfo_all_blocks=1 00:09:19.411 --rc geninfo_unexecuted_blocks=1 00:09:19.411 00:09:19.411 ' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:19.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.411 --rc genhtml_branch_coverage=1 00:09:19.411 --rc genhtml_function_coverage=1 00:09:19.411 --rc genhtml_legend=1 00:09:19.411 --rc geninfo_all_blocks=1 00:09:19.411 --rc geninfo_unexecuted_blocks=1 00:09:19.411 00:09:19.411 ' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.411 15:42:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.411 
15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.411 15:42:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.979 15:42:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:25.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:25.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.979 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:25.980 Found net devices under 0000:af:00.0: cvl_0_0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:25.980 Found net devices under 0000:af:00.1: cvl_0_1 00:09:25.980 
15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:09:25.980 00:09:25.980 --- 10.0.0.2 ping statistics --- 00:09:25.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.980 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:25.980 00:09:25.980 --- 10.0.0.1 ping statistics --- 00:09:25.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.980 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1881571 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1881571 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1881571 ']' 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.980 [2024-12-09 15:42:20.374733] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:09:25.980 [2024-12-09 15:42:20.374783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.980 [2024-12-09 15:42:20.452708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.980 [2024-12-09 15:42:20.496144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.980 [2024-12-09 15:42:20.496179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.980 [2024-12-09 15:42:20.496186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.980 [2024-12-09 15:42:20.496192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.980 [2024-12-09 15:42:20.496197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.980 [2024-12-09 15:42:20.497732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.980 [2024-12-09 15:42:20.497769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.980 [2024-12-09 15:42:20.497876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.980 [2024-12-09 15:42:20.497877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.980 [2024-12-09 15:42:20.635819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.980 
15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.980 Malloc0 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.980 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 [2024-12-09 15:42:20.701859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:25.981 test case1: single bdev can't be used in multiple subsystems 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 [2024-12-09 15:42:20.725760] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:25.981 [2024-12-09 
15:42:20.725781] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:25.981 [2024-12-09 15:42:20.725789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.981 request: 00:09:25.981 { 00:09:25.981 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:25.981 "namespace": { 00:09:25.981 "bdev_name": "Malloc0", 00:09:25.981 "no_auto_visible": false, 00:09:25.981 "hide_metadata": false 00:09:25.981 }, 00:09:25.981 "method": "nvmf_subsystem_add_ns", 00:09:25.981 "req_id": 1 00:09:25.981 } 00:09:25.981 Got JSON-RPC error response 00:09:25.981 response: 00:09:25.981 { 00:09:25.981 "code": -32602, 00:09:25.981 "message": "Invalid parameters" 00:09:25.981 } 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:25.981 Adding namespace failed - expected result. 
00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:25.981 test case2: host connect to nvmf target in multiple paths 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.981 [2024-12-09 15:42:20.733881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.981 15:42:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.913 15:42:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:27.844 15:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.844 15:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:27.844 15:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.844 15:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:27.845 15:42:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:30.366 15:42:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:30.366 [global] 00:09:30.366 thread=1 00:09:30.366 invalidate=1 00:09:30.366 rw=write 00:09:30.366 time_based=1 00:09:30.366 runtime=1 00:09:30.366 ioengine=libaio 00:09:30.366 direct=1 00:09:30.366 bs=4096 00:09:30.366 iodepth=1 00:09:30.366 norandommap=0 00:09:30.366 numjobs=1 00:09:30.366 00:09:30.366 verify_dump=1 00:09:30.366 verify_backlog=512 00:09:30.366 verify_state_save=0 00:09:30.366 do_verify=1 00:09:30.366 verify=crc32c-intel 00:09:30.366 [job0] 00:09:30.366 filename=/dev/nvme0n1 00:09:30.366 Could not set queue depth (nvme0n1) 00:09:30.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.366 fio-3.35 00:09:30.366 Starting 1 thread 00:09:31.296 00:09:31.296 job0: (groupid=0, jobs=1): err= 0: pid=1882535: Mon Dec 9 15:42:26 2024 00:09:31.296 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:31.296 slat (nsec): min=7268, max=40292, avg=8305.69, stdev=1519.24 00:09:31.296 clat (usec): min=144, max=383, avg=182.24, stdev=22.15 00:09:31.296 lat (usec): min=163, max=391, avg=190.55, 
stdev=22.37 00:09:31.296 clat percentiles (usec): 00:09:31.296 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:09:31.296 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:09:31.296 | 70.00th=[ 182], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 225], 00:09:31.296 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 302], 99.95th=[ 310], 00:09:31.296 | 99.99th=[ 383] 00:09:31.296 write: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1001msec); 0 zone resets 00:09:31.296 slat (usec): min=10, max=27968, avg=22.30, stdev=532.72 00:09:31.296 clat (usec): min=111, max=443, avg=158.06, stdev=25.64 00:09:31.296 lat (usec): min=122, max=28202, avg=180.36, stdev=534.80 00:09:31.296 clat percentiles (usec): 00:09:31.296 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 133], 00:09:31.296 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:09:31.296 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 198], 95.00th=[ 204], 00:09:31.296 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 253], 99.95th=[ 383], 00:09:31.296 | 99.99th=[ 445] 00:09:31.296 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:31.296 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:31.296 lat (usec) : 250=99.06%, 500=0.94% 00:09:31.296 cpu : usr=4.70%, sys=8.30%, ctx=5317, majf=0, minf=1 00:09:31.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.296 issued rwts: total=2560,2754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.296 00:09:31.296 Run status group 0 (all jobs): 00:09:31.296 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:31.296 WRITE: bw=10.7MiB/s (11.3MB/s), 
10.7MiB/s-10.7MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.3MB), run=1001-1001msec 00:09:31.296 00:09:31.296 Disk stats (read/write): 00:09:31.296 nvme0n1: ios=2246/2560, merge=0/0, ticks=837/361, in_queue=1198, util=98.60% 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:31.554 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.811 15:42:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.811 rmmod nvme_tcp 00:09:31.811 rmmod nvme_fabrics 00:09:31.811 rmmod nvme_keyring 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1881571 ']' 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1881571 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1881571 ']' 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1881571 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1881571 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1881571' 00:09:31.811 killing process with pid 1881571 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1881571 00:09:31.811 15:42:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1881571 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.070 15:42:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.976 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.976 00:09:33.976 real 0m15.007s 00:09:33.976 user 0m33.393s 00:09:33.976 sys 0m5.350s 00:09:33.976 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.976 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.976 ************************************ 00:09:33.976 END TEST nvmf_nmic 00:09:33.976 ************************************ 00:09:34.235 15:42:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:34.236 15:42:29 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.236 ************************************ 00:09:34.236 START TEST nvmf_fio_target 00:09:34.236 ************************************ 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:34.236 * Looking for test storage... 00:09:34.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.236 
15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.236 --rc genhtml_branch_coverage=1 00:09:34.236 --rc genhtml_function_coverage=1 00:09:34.236 --rc genhtml_legend=1 00:09:34.236 --rc geninfo_all_blocks=1 00:09:34.236 --rc geninfo_unexecuted_blocks=1 00:09:34.236 00:09:34.236 ' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.236 --rc genhtml_branch_coverage=1 00:09:34.236 --rc genhtml_function_coverage=1 00:09:34.236 --rc genhtml_legend=1 00:09:34.236 --rc geninfo_all_blocks=1 00:09:34.236 --rc geninfo_unexecuted_blocks=1 00:09:34.236 00:09:34.236 ' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.236 --rc genhtml_branch_coverage=1 00:09:34.236 --rc genhtml_function_coverage=1 00:09:34.236 --rc genhtml_legend=1 00:09:34.236 --rc geninfo_all_blocks=1 00:09:34.236 --rc geninfo_unexecuted_blocks=1 00:09:34.236 00:09:34.236 ' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.236 --rc genhtml_branch_coverage=1 00:09:34.236 --rc 
genhtml_function_coverage=1 00:09:34.236 --rc genhtml_legend=1 00:09:34.236 --rc geninfo_all_blocks=1 00:09:34.236 --rc geninfo_unexecuted_blocks=1 00:09:34.236 00:09:34.236 ' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.236 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.495 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.495 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.495 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:34.496 15:42:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.067 15:42:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:41.067 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:41.067 15:42:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:41.067 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.067 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:41.068 Found net devices under 0000:af:00.0: cvl_0_0 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:41.068 Found net devices under 0000:af:00.1: cvl_0_1 
00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:41.068 00:09:41.068 --- 10.0.0.2 ping statistics --- 00:09:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.068 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:09:41.068 00:09:41.068 --- 10.0.0.1 ping statistics --- 00:09:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.068 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1886367 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1886367 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1886367 ']' 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.068 15:42:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.068 [2024-12-09 15:42:35.482144] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:09:41.068 [2024-12-09 15:42:35.482184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.068 [2024-12-09 15:42:35.564086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.068 [2024-12-09 15:42:35.605769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.068 [2024-12-09 15:42:35.605803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.068 [2024-12-09 15:42:35.605810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.068 [2024-12-09 15:42:35.605816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.068 [2024-12-09 15:42:35.605821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:41.068 [2024-12-09 15:42:35.607322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.068 [2024-12-09 15:42:35.607414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.068 [2024-12-09 15:42:35.607519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.068 [2024-12-09 15:42:35.607520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.327 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:41.327 [2024-12-09 15:42:36.535653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.583 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.583 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:41.584 15:42:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.841 15:42:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:41.841 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.098 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:42.098 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.355 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:42.355 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:42.612 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.612 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:42.869 15:42:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.869 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:42.869 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.127 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:43.127 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:43.384 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.641 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:43.641 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.899 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:43.899 15:42:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.899 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.156 [2024-12-09 15:42:39.234322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.156 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:44.413 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:44.670 15:42:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:45.602 15:42:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:48.125 15:42:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.125 [global] 00:09:48.125 thread=1 00:09:48.125 invalidate=1 00:09:48.125 rw=write 00:09:48.125 time_based=1 00:09:48.125 runtime=1 00:09:48.125 ioengine=libaio 00:09:48.125 direct=1 00:09:48.125 bs=4096 00:09:48.125 iodepth=1 00:09:48.125 norandommap=0 00:09:48.125 numjobs=1 00:09:48.125 00:09:48.125 
verify_dump=1 00:09:48.125 verify_backlog=512 00:09:48.125 verify_state_save=0 00:09:48.125 do_verify=1 00:09:48.125 verify=crc32c-intel 00:09:48.125 [job0] 00:09:48.125 filename=/dev/nvme0n1 00:09:48.125 [job1] 00:09:48.125 filename=/dev/nvme0n2 00:09:48.125 [job2] 00:09:48.125 filename=/dev/nvme0n3 00:09:48.125 [job3] 00:09:48.125 filename=/dev/nvme0n4 00:09:48.125 Could not set queue depth (nvme0n1) 00:09:48.125 Could not set queue depth (nvme0n2) 00:09:48.125 Could not set queue depth (nvme0n3) 00:09:48.125 Could not set queue depth (nvme0n4) 00:09:48.125 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.125 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.125 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.125 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.125 fio-3.35 00:09:48.125 Starting 4 threads 00:09:49.496 00:09:49.496 job0: (groupid=0, jobs=1): err= 0: pid=1887708: Mon Dec 9 15:42:44 2024 00:09:49.496 read: IOPS=2169, BW=8679KiB/s (8888kB/s)(8688KiB/1001msec) 00:09:49.496 slat (nsec): min=8349, max=22980, avg=9932.69, stdev=1342.17 00:09:49.496 clat (usec): min=174, max=494, avg=232.69, stdev=22.81 00:09:49.496 lat (usec): min=185, max=505, avg=242.63, stdev=23.13 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:09:49.496 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:09:49.496 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:09:49.496 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 461], 99.95th=[ 486], 00:09:49.496 | 99.99th=[ 494] 00:09:49.496 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:49.496 slat (nsec): min=11096, max=40155, avg=13448.72, stdev=2131.51 
00:09:49.496 clat (usec): min=116, max=505, avg=165.21, stdev=18.46 00:09:49.496 lat (usec): min=128, max=519, avg=178.66, stdev=19.14 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:09:49.496 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:49.496 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:09:49.496 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 235], 99.95th=[ 371], 00:09:49.496 | 99.99th=[ 506] 00:09:49.496 bw ( KiB/s): min=11001, max=11001, per=42.27%, avg=11001.00, stdev= 0.00, samples=1 00:09:49.496 iops : min= 2750, max= 2750, avg=2750.00, stdev= 0.00, samples=1 00:09:49.496 lat (usec) : 250=91.31%, 500=8.66%, 750=0.02% 00:09:49.496 cpu : usr=4.80%, sys=8.00%, ctx=4732, majf=0, minf=2 00:09:49.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 issued rwts: total=2172,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.496 job1: (groupid=0, jobs=1): err= 0: pid=1887709: Mon Dec 9 15:42:44 2024 00:09:49.496 read: IOPS=1853, BW=7413KiB/s (7590kB/s)(7420KiB/1001msec) 00:09:49.496 slat (nsec): min=6636, max=27814, avg=7712.74, stdev=1289.81 00:09:49.496 clat (usec): min=171, max=41851, avg=350.73, stdev=2124.26 00:09:49.496 lat (usec): min=178, max=41859, avg=358.44, stdev=2124.24 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:49.496 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:09:49.496 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 371], 95.00th=[ 375], 00:09:49.496 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[41157], 99.95th=[41681], 00:09:49.496 | 99.99th=[41681] 00:09:49.496 
write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:49.496 slat (nsec): min=9738, max=37772, avg=11596.37, stdev=1778.39 00:09:49.496 clat (usec): min=107, max=923, avg=146.25, stdev=32.40 00:09:49.496 lat (usec): min=118, max=934, avg=157.84, stdev=32.58 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:09:49.496 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:09:49.496 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 184], 00:09:49.496 | 99.00th=[ 219], 99.50th=[ 243], 99.90th=[ 627], 99.95th=[ 676], 00:09:49.496 | 99.99th=[ 922] 00:09:49.496 bw ( KiB/s): min= 8192, max= 8192, per=31.48%, avg=8192.00, stdev= 0.00, samples=1 00:09:49.496 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:49.496 lat (usec) : 250=87.37%, 500=12.40%, 750=0.08%, 1000=0.03% 00:09:49.496 lat (msec) : 50=0.13% 00:09:49.496 cpu : usr=2.50%, sys=3.60%, ctx=3906, majf=0, minf=1 00:09:49.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.496 job2: (groupid=0, jobs=1): err= 0: pid=1887710: Mon Dec 9 15:42:44 2024 00:09:49.496 read: IOPS=1281, BW=5126KiB/s (5249kB/s)(5244KiB/1023msec) 00:09:49.496 slat (nsec): min=7173, max=45214, avg=8938.92, stdev=2234.11 00:09:49.496 clat (usec): min=177, max=41971, avg=547.49, stdev=3553.00 00:09:49.496 lat (usec): min=186, max=41994, avg=556.43, stdev=3553.98 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:09:49.496 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 239], 
00:09:49.496 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:09:49.496 | 99.00th=[ 310], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:49.496 | 99.99th=[42206] 00:09:49.496 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:09:49.496 slat (nsec): min=10012, max=37999, avg=11869.75, stdev=1914.21 00:09:49.496 clat (usec): min=128, max=681, avg=173.66, stdev=30.76 00:09:49.496 lat (usec): min=138, max=694, avg=185.53, stdev=31.20 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:49.496 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 176], 00:09:49.496 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:09:49.496 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 619], 99.95th=[ 685], 00:09:49.496 | 99.99th=[ 685] 00:09:49.496 bw ( KiB/s): min= 1960, max=10328, per=23.61%, avg=6144.00, stdev=5917.07, samples=2 00:09:49.496 iops : min= 490, max= 2582, avg=1536.00, stdev=1479.27, samples=2 00:09:49.496 lat (usec) : 250=81.59%, 500=17.98%, 750=0.07% 00:09:49.496 lat (msec) : 50=0.35% 00:09:49.496 cpu : usr=2.35%, sys=4.50%, ctx=2847, majf=0, minf=1 00:09:49.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 issued rwts: total=1311,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.496 job3: (groupid=0, jobs=1): err= 0: pid=1887711: Mon Dec 9 15:42:44 2024 00:09:49.496 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:09:49.496 slat (nsec): min=11978, max=29442, avg=23340.68, stdev=4735.31 00:09:49.496 clat (usec): min=40856, max=41251, avg=40979.12, stdev=86.51 00:09:49.496 lat (usec): min=40880, max=41263, avg=41002.46, stdev=84.00 
00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:49.496 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.496 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.496 | 99.99th=[41157] 00:09:49.496 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:09:49.496 slat (nsec): min=11271, max=43388, avg=14487.27, stdev=3559.77 00:09:49.496 clat (usec): min=140, max=273, avg=175.27, stdev=16.51 00:09:49.496 lat (usec): min=153, max=290, avg=189.75, stdev=17.30 00:09:49.496 clat percentiles (usec): 00:09:49.496 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:09:49.496 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:49.496 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:09:49.496 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 273], 00:09:49.496 | 99.99th=[ 273] 00:09:49.496 bw ( KiB/s): min= 4096, max= 4096, per=15.74%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.496 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.496 lat (usec) : 250=95.51%, 500=0.37% 00:09:49.496 lat (msec) : 50=4.12% 00:09:49.496 cpu : usr=0.30%, sys=1.20%, ctx=536, majf=0, minf=1 00:09:49.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.496 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.496 00:09:49.496 Run status group 0 (all jobs): 00:09:49.496 READ: bw=20.5MiB/s (21.5MB/s), 87.8KiB/s-8679KiB/s (89.9kB/s-8888kB/s), io=20.9MiB (22.0MB), run=1001-1023msec 00:09:49.496 
WRITE: bw=25.4MiB/s (26.7MB/s), 2044KiB/s-9.99MiB/s (2093kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1023msec 00:09:49.496 00:09:49.496 Disk stats (read/write): 00:09:49.496 nvme0n1: ios=1992/2048, merge=0/0, ticks=440/303, in_queue=743, util=86.17% 00:09:49.496 nvme0n2: ios=1543/1536, merge=0/0, ticks=722/216, in_queue=938, util=88.89% 00:09:49.496 nvme0n3: ios=1363/1536, merge=0/0, ticks=558/257, in_queue=815, util=94.03% 00:09:49.496 nvme0n4: ios=75/512, merge=0/0, ticks=858/81, in_queue=939, util=94.29% 00:09:49.497 15:42:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:49.497 [global] 00:09:49.497 thread=1 00:09:49.497 invalidate=1 00:09:49.497 rw=randwrite 00:09:49.497 time_based=1 00:09:49.497 runtime=1 00:09:49.497 ioengine=libaio 00:09:49.497 direct=1 00:09:49.497 bs=4096 00:09:49.497 iodepth=1 00:09:49.497 norandommap=0 00:09:49.497 numjobs=1 00:09:49.497 00:09:49.497 verify_dump=1 00:09:49.497 verify_backlog=512 00:09:49.497 verify_state_save=0 00:09:49.497 do_verify=1 00:09:49.497 verify=crc32c-intel 00:09:49.497 [job0] 00:09:49.497 filename=/dev/nvme0n1 00:09:49.497 [job1] 00:09:49.497 filename=/dev/nvme0n2 00:09:49.497 [job2] 00:09:49.497 filename=/dev/nvme0n3 00:09:49.497 [job3] 00:09:49.497 filename=/dev/nvme0n4 00:09:49.497 Could not set queue depth (nvme0n1) 00:09:49.497 Could not set queue depth (nvme0n2) 00:09:49.497 Could not set queue depth (nvme0n3) 00:09:49.497 Could not set queue depth (nvme0n4) 00:09:49.753 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.753 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.753 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.753 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.753 fio-3.35 00:09:49.753 Starting 4 threads 00:09:51.124 00:09:51.124 job0: (groupid=0, jobs=1): err= 0: pid=1888097: Mon Dec 9 15:42:45 2024 00:09:51.124 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:09:51.124 slat (nsec): min=10203, max=25528, avg=24050.73, stdev=3127.49 00:09:51.124 clat (usec): min=40805, max=41969, avg=41190.71, stdev=409.36 00:09:51.124 lat (usec): min=40815, max=41994, avg=41214.76, stdev=410.11 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:51.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:51.124 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:09:51.124 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.124 | 99.99th=[42206] 00:09:51.124 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:51.124 slat (nsec): min=6412, max=54633, avg=11758.39, stdev=3242.62 00:09:51.124 clat (usec): min=134, max=272, avg=172.35, stdev=17.35 00:09:51.124 lat (usec): min=146, max=301, avg=184.11, stdev=18.37 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:51.124 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:51.124 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 198], 00:09:51.124 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 273], 99.95th=[ 273], 00:09:51.124 | 99.99th=[ 273] 00:09:51.124 bw ( KiB/s): min= 4096, max= 4096, per=25.01%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.124 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.124 lat (usec) : 250=95.32%, 500=0.56% 00:09:51.124 lat (msec) : 50=4.12% 00:09:51.124 cpu : usr=0.40%, sys=0.90%, ctx=536, majf=0, minf=1 00:09:51.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:09:51.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.124 job1: (groupid=0, jobs=1): err= 0: pid=1888105: Mon Dec 9 15:42:45 2024 00:09:51.124 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:09:51.124 slat (nsec): min=3781, max=22518, avg=19261.83, stdev=5167.41 00:09:51.124 clat (usec): min=40539, max=41129, avg=40956.68, stdev=106.69 00:09:51.124 lat (usec): min=40542, max=41142, avg=40975.94, stdev=109.39 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:51.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:51.124 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:51.124 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:51.124 | 99.99th=[41157] 00:09:51.124 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:09:51.124 slat (nsec): min=3427, max=35843, avg=6313.64, stdev=4179.81 00:09:51.124 clat (usec): min=126, max=274, avg=183.39, stdev=23.68 00:09:51.124 lat (usec): min=130, max=310, avg=189.71, stdev=24.62 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 155], 20.00th=[ 165], 00:09:51.124 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:51.124 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 237], 00:09:51.124 | 99.00th=[ 243], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 277], 00:09:51.124 | 99.99th=[ 277] 00:09:51.124 bw ( KiB/s): min= 4096, max= 4096, per=25.01%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.124 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.124 lat (usec) : 250=94.95%, 
500=0.75% 00:09:51.124 lat (msec) : 50=4.30% 00:09:51.124 cpu : usr=0.19%, sys=0.48%, ctx=535, majf=0, minf=1 00:09:51.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.124 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.124 job2: (groupid=0, jobs=1): err= 0: pid=1888125: Mon Dec 9 15:42:45 2024 00:09:51.124 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:51.124 slat (nsec): min=4751, max=31243, avg=6227.42, stdev=1613.62 00:09:51.124 clat (usec): min=156, max=364, avg=206.61, stdev=27.17 00:09:51.124 lat (usec): min=162, max=370, avg=212.84, stdev=27.35 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:51.124 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:09:51.124 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 260], 00:09:51.124 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 351], 00:09:51.124 | 99.99th=[ 367] 00:09:51.124 write: IOPS=2724, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:09:51.124 slat (nsec): min=6592, max=57666, avg=8617.01, stdev=2452.48 00:09:51.124 clat (usec): min=108, max=342, avg=153.98, stdev=34.35 00:09:51.124 lat (usec): min=115, max=349, avg=162.59, stdev=34.26 00:09:51.124 clat percentiles (usec): 00:09:51.124 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 129], 00:09:51.124 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 149], 00:09:51.124 | 70.00th=[ 159], 80.00th=[ 178], 90.00th=[ 202], 95.00th=[ 237], 00:09:51.124 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 297], 00:09:51.124 | 99.99th=[ 343] 00:09:51.124 bw ( KiB/s): min=12288, max=12288, per=75.02%, 
avg=12288.00, stdev= 0.00, samples=1 00:09:51.124 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:51.124 lat (usec) : 250=93.87%, 500=6.13% 00:09:51.124 cpu : usr=3.00%, sys=6.70%, ctx=5287, majf=0, minf=1 00:09:51.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 issued rwts: total=2560,2727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.125 job3: (groupid=0, jobs=1): err= 0: pid=1888132: Mon Dec 9 15:42:45 2024 00:09:51.125 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:09:51.125 slat (nsec): min=10217, max=24990, avg=21667.52, stdev=3715.00 00:09:51.125 clat (usec): min=237, max=41142, avg=39170.39, stdev=8487.72 00:09:51.125 lat (usec): min=260, max=41166, avg=39192.06, stdev=8487.60 00:09:51.125 clat percentiles (usec): 00:09:51.125 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:51.125 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:51.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:51.125 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:51.125 | 99.99th=[41157] 00:09:51.125 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:51.125 slat (nsec): min=10019, max=34993, avg=11426.13, stdev=1964.62 00:09:51.125 clat (usec): min=136, max=288, avg=182.92, stdev=24.09 00:09:51.125 lat (usec): min=147, max=299, avg=194.34, stdev=24.20 00:09:51.125 clat percentiles (usec): 00:09:51.125 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:51.125 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 186], 00:09:51.125 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 231], 00:09:51.125 | 
99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 289], 00:09:51.125 | 99.99th=[ 289] 00:09:51.125 bw ( KiB/s): min= 4096, max= 4096, per=25.01%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.125 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.125 lat (usec) : 250=93.08%, 500=2.80% 00:09:51.125 lat (msec) : 50=4.11% 00:09:51.125 cpu : usr=0.80%, sys=0.50%, ctx=535, majf=0, minf=1 00:09:51.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.125 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.125 00:09:51.125 Run status group 0 (all jobs): 00:09:51.125 READ: bw=9.86MiB/s (10.3MB/s), 87.6KiB/s-9.99MiB/s (89.8kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1041msec 00:09:51.125 WRITE: bw=16.0MiB/s (16.8MB/s), 1967KiB/s-10.6MiB/s (2015kB/s-11.2MB/s), io=16.7MiB (17.5MB), run=1001-1041msec 00:09:51.125 00:09:51.125 Disk stats (read/write): 00:09:51.125 nvme0n1: ios=59/512, merge=0/0, ticks=1207/73, in_queue=1280, util=98.50% 00:09:51.125 nvme0n2: ios=67/512, merge=0/0, ticks=757/90, in_queue=847, util=87.28% 00:09:51.125 nvme0n3: ios=2104/2426, merge=0/0, ticks=438/356, in_queue=794, util=90.26% 00:09:51.125 nvme0n4: ios=76/512, merge=0/0, ticks=800/86, in_queue=886, util=94.71% 00:09:51.125 15:42:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:51.125 [global] 00:09:51.125 thread=1 00:09:51.125 invalidate=1 00:09:51.125 rw=write 00:09:51.125 time_based=1 00:09:51.125 runtime=1 00:09:51.125 ioengine=libaio 00:09:51.125 direct=1 00:09:51.125 bs=4096 00:09:51.125 iodepth=128 00:09:51.125 norandommap=0 00:09:51.125 
numjobs=1 00:09:51.125 00:09:51.125 verify_dump=1 00:09:51.125 verify_backlog=512 00:09:51.125 verify_state_save=0 00:09:51.125 do_verify=1 00:09:51.125 verify=crc32c-intel 00:09:51.125 [job0] 00:09:51.125 filename=/dev/nvme0n1 00:09:51.125 [job1] 00:09:51.125 filename=/dev/nvme0n2 00:09:51.125 [job2] 00:09:51.125 filename=/dev/nvme0n3 00:09:51.125 [job3] 00:09:51.125 filename=/dev/nvme0n4 00:09:51.125 Could not set queue depth (nvme0n1) 00:09:51.125 Could not set queue depth (nvme0n2) 00:09:51.125 Could not set queue depth (nvme0n3) 00:09:51.125 Could not set queue depth (nvme0n4) 00:09:51.125 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.125 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.125 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.125 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.125 fio-3.35 00:09:51.125 Starting 4 threads 00:09:52.495 00:09:52.496 job0: (groupid=0, jobs=1): err= 0: pid=1888574: Mon Dec 9 15:42:47 2024 00:09:52.496 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:09:52.496 slat (nsec): min=1442, max=12362k, avg=79284.90, stdev=625065.87 00:09:52.496 clat (usec): min=359, max=58951, avg=11940.19, stdev=7173.79 00:09:52.496 lat (usec): min=369, max=59257, avg=12019.47, stdev=7230.63 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 1004], 5.00th=[ 6128], 10.00th=[ 8160], 20.00th=[ 9241], 00:09:52.496 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11338], 00:09:52.496 | 70.00th=[11863], 80.00th=[12518], 90.00th=[14615], 95.00th=[18744], 00:09:52.496 | 99.00th=[55837], 99.50th=[57934], 99.90th=[57934], 99.95th=[58983], 00:09:52.496 | 99.99th=[58983] 00:09:52.496 write: IOPS=5293, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1005msec); 0 zone 
resets 00:09:52.496 slat (usec): min=2, max=43224, avg=78.07, stdev=864.92 00:09:52.496 clat (usec): min=172, max=81037, avg=10916.07, stdev=6933.35 00:09:52.496 lat (usec): min=184, max=81056, avg=10994.13, stdev=7026.53 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 1663], 5.00th=[ 3589], 10.00th=[ 4883], 20.00th=[ 7046], 00:09:52.496 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10159], 00:09:52.496 | 70.00th=[10683], 80.00th=[12911], 90.00th=[19268], 95.00th=[23725], 00:09:52.496 | 99.00th=[40109], 99.50th=[47449], 99.90th=[81265], 99.95th=[81265], 00:09:52.496 | 99.99th=[81265] 00:09:52.496 bw ( KiB/s): min=20112, max=21424, per=30.09%, avg=20768.00, stdev=927.72, samples=2 00:09:52.496 iops : min= 5028, max= 5356, avg=5192.00, stdev=231.93, samples=2 00:09:52.496 lat (usec) : 250=0.01%, 500=0.04%, 750=0.03%, 1000=0.56% 00:09:52.496 lat (msec) : 2=1.27%, 4=2.62%, 10=38.85%, 20=50.79%, 50=5.01% 00:09:52.496 lat (msec) : 100=0.82% 00:09:52.496 cpu : usr=3.59%, sys=6.67%, ctx=369, majf=0, minf=1 00:09:52.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:52.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.496 issued rwts: total=5120,5320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.496 job1: (groupid=0, jobs=1): err= 0: pid=1888599: Mon Dec 9 15:42:47 2024 00:09:52.496 read: IOPS=3408, BW=13.3MiB/s (14.0MB/s)(13.9MiB/1043msec) 00:09:52.496 slat (nsec): min=1147, max=16905k, avg=120724.24, stdev=844619.23 00:09:52.496 clat (usec): min=7057, max=60934, avg=16681.36, stdev=11098.66 00:09:52.496 lat (usec): min=7060, max=61313, avg=16802.08, stdev=11155.20 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 7308], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 00:09:52.496 | 30.00th=[10028], 
40.00th=[10552], 50.00th=[11076], 60.00th=[11863], 00:09:52.496 | 70.00th=[16057], 80.00th=[23987], 90.00th=[34341], 95.00th=[44827], 00:09:52.496 | 99.00th=[53216], 99.50th=[54789], 99.90th=[61080], 99.95th=[61080], 00:09:52.496 | 99.99th=[61080] 00:09:52.496 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:09:52.496 slat (nsec): min=1951, max=20853k, avg=153922.37, stdev=934308.15 00:09:52.496 clat (usec): min=1395, max=71278, avg=20255.83, stdev=14053.73 00:09:52.496 lat (usec): min=1405, max=71282, avg=20409.76, stdev=14127.24 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 4752], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:52.496 | 30.00th=[10421], 40.00th=[11338], 50.00th=[14615], 60.00th=[19006], 00:09:52.496 | 70.00th=[22152], 80.00th=[31589], 90.00th=[42206], 95.00th=[45876], 00:09:52.496 | 99.00th=[68682], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:09:52.496 | 99.99th=[70779] 00:09:52.496 bw ( KiB/s): min=13680, max=14992, per=20.77%, avg=14336.00, stdev=927.72, samples=2 00:09:52.496 iops : min= 3420, max= 3748, avg=3584.00, stdev=231.93, samples=2 00:09:52.496 lat (msec) : 2=0.17%, 4=0.01%, 10=23.84%, 20=46.95%, 50=25.77% 00:09:52.496 lat (msec) : 100=3.25% 00:09:52.496 cpu : usr=2.21%, sys=3.84%, ctx=394, majf=0, minf=1 00:09:52.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:52.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.496 issued rwts: total=3555,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.496 job2: (groupid=0, jobs=1): err= 0: pid=1888636: Mon Dec 9 15:42:47 2024 00:09:52.496 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:52.496 slat (nsec): min=1196, max=12605k, avg=107909.29, stdev=725075.45 00:09:52.496 clat (usec): min=4908, 
max=44860, avg=13591.81, stdev=4750.63 00:09:52.496 lat (usec): min=4919, max=44879, avg=13699.72, stdev=4791.36 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 6718], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10814], 00:09:52.496 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12518], 60.00th=[13435], 00:09:52.496 | 70.00th=[14484], 80.00th=[15795], 90.00th=[18744], 95.00th=[20579], 00:09:52.496 | 99.00th=[39584], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:09:52.496 | 99.99th=[44827] 00:09:52.496 write: IOPS=4889, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1003msec); 0 zone resets 00:09:52.496 slat (usec): min=2, max=11916, avg=95.03, stdev=618.77 00:09:52.496 clat (usec): min=1162, max=52908, avg=13126.98, stdev=6117.27 00:09:52.496 lat (usec): min=1172, max=52917, avg=13222.01, stdev=6150.13 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 4817], 5.00th=[ 7308], 10.00th=[ 8979], 20.00th=[10028], 00:09:52.496 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:09:52.496 | 70.00th=[12911], 80.00th=[13566], 90.00th=[17957], 95.00th=[23725], 00:09:52.496 | 99.00th=[43254], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:09:52.496 | 99.99th=[52691] 00:09:52.496 bw ( KiB/s): min=17736, max=20480, per=27.69%, avg=19108.00, stdev=1940.30, samples=2 00:09:52.496 iops : min= 4434, max= 5120, avg=4777.00, stdev=485.08, samples=2 00:09:52.496 lat (msec) : 2=0.05%, 4=0.23%, 10=15.89%, 20=77.72%, 50=5.97% 00:09:52.496 lat (msec) : 100=0.14% 00:09:52.496 cpu : usr=3.69%, sys=6.49%, ctx=420, majf=0, minf=1 00:09:52.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:52.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.496 issued rwts: total=4608,4904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.496 job3: 
(groupid=0, jobs=1): err= 0: pid=1888650: Mon Dec 9 15:42:47 2024 00:09:52.496 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:09:52.496 slat (nsec): min=1097, max=20678k, avg=121518.40, stdev=876263.11 00:09:52.496 clat (usec): min=3723, max=53607, avg=15958.53, stdev=7904.88 00:09:52.496 lat (usec): min=3739, max=53659, avg=16080.05, stdev=7942.19 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338], 00:09:52.496 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[14222], 00:09:52.496 | 70.00th=[15270], 80.00th=[17171], 90.00th=[25822], 95.00th=[32900], 00:09:52.496 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:09:52.496 | 99.99th=[53740] 00:09:52.496 write: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec); 0 zone resets 00:09:52.496 slat (nsec): min=1978, max=17050k, avg=112263.62, stdev=778910.43 00:09:52.496 clat (usec): min=355, max=55209, avg=14809.10, stdev=9828.56 00:09:52.496 lat (usec): min=831, max=55266, avg=14921.36, stdev=9895.31 00:09:52.496 clat percentiles (usec): 00:09:52.496 | 1.00th=[ 2278], 5.00th=[ 6587], 10.00th=[ 7832], 20.00th=[ 9896], 00:09:52.496 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12387], 00:09:52.496 | 70.00th=[13173], 80.00th=[15008], 90.00th=[30802], 95.00th=[38011], 00:09:52.496 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[52167], 00:09:52.496 | 99.99th=[55313] 00:09:52.496 bw ( KiB/s): min=16064, max=16704, per=23.74%, avg=16384.00, stdev=452.55, samples=2 00:09:52.496 iops : min= 4016, max= 4176, avg=4096.00, stdev=113.14, samples=2 00:09:52.496 lat (usec) : 500=0.01%, 1000=0.08% 00:09:52.496 lat (msec) : 2=0.39%, 4=1.10%, 10=10.36%, 20=72.37%, 50=15.14% 00:09:52.496 lat (msec) : 100=0.56% 00:09:52.496 cpu : usr=2.09%, sys=4.69%, ctx=408, majf=0, minf=2 00:09:52.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:52.496 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.496 issued rwts: total=4096,4188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.496 00:09:52.496 Run status group 0 (all jobs): 00:09:52.496 READ: bw=65.1MiB/s (68.2MB/s), 13.3MiB/s-19.9MiB/s (14.0MB/s-20.9MB/s), io=67.9MiB (71.2MB), run=1003-1043msec 00:09:52.496 WRITE: bw=67.4MiB/s (70.7MB/s), 13.4MiB/s-20.7MiB/s (14.1MB/s-21.7MB/s), io=70.3MiB (73.7MB), run=1003-1043msec 00:09:52.496 00:09:52.496 Disk stats (read/write): 00:09:52.496 nvme0n1: ios=3931/4096, merge=0/0, ticks=41927/40418, in_queue=82345, util=98.20% 00:09:52.496 nvme0n2: ios=3605/3584, merge=0/0, ticks=27620/33484, in_queue=61104, util=92.36% 00:09:52.496 nvme0n3: ios=3718/4096, merge=0/0, ticks=41722/38451, in_queue=80173, util=95.16% 00:09:52.496 nvme0n4: ios=3072/3446, merge=0/0, ticks=18414/21806, in_queue=40220, util=88.33% 00:09:52.496 15:42:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:52.496 [global] 00:09:52.496 thread=1 00:09:52.496 invalidate=1 00:09:52.496 rw=randwrite 00:09:52.496 time_based=1 00:09:52.496 runtime=1 00:09:52.496 ioengine=libaio 00:09:52.496 direct=1 00:09:52.496 bs=4096 00:09:52.496 iodepth=128 00:09:52.496 norandommap=0 00:09:52.496 numjobs=1 00:09:52.496 00:09:52.496 verify_dump=1 00:09:52.496 verify_backlog=512 00:09:52.496 verify_state_save=0 00:09:52.496 do_verify=1 00:09:52.496 verify=crc32c-intel 00:09:52.496 [job0] 00:09:52.496 filename=/dev/nvme0n1 00:09:52.496 [job1] 00:09:52.496 filename=/dev/nvme0n2 00:09:52.496 [job2] 00:09:52.496 filename=/dev/nvme0n3 00:09:52.496 [job3] 00:09:52.496 filename=/dev/nvme0n4 00:09:52.496 Could not set queue depth (nvme0n1) 00:09:52.754 Could not set queue depth 
(nvme0n2) 00:09:52.754 Could not set queue depth (nvme0n3) 00:09:52.754 Could not set queue depth (nvme0n4) 00:09:52.754 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.754 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.754 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.754 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.754 fio-3.35 00:09:52.754 Starting 4 threads 00:09:54.257 00:09:54.257 job0: (groupid=0, jobs=1): err= 0: pid=1889040: Mon Dec 9 15:42:49 2024 00:09:54.257 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:09:54.257 slat (nsec): min=1367, max=9206.4k, avg=83524.32, stdev=605254.69 00:09:54.257 clat (usec): min=3681, max=19189, avg=10522.59, stdev=2338.24 00:09:54.257 lat (usec): min=3686, max=23322, avg=10606.11, stdev=2386.64 00:09:54.257 clat percentiles (usec): 00:09:54.257 | 1.00th=[ 5014], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9372], 00:09:54.257 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:09:54.257 | 70.00th=[10290], 80.00th=[11994], 90.00th=[14091], 95.00th=[15926], 00:09:54.257 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:09:54.257 | 99.99th=[19268] 00:09:54.257 write: IOPS=6548, BW=25.6MiB/s (26.8MB/s)(25.8MiB/1010msec); 0 zone resets 00:09:54.257 slat (usec): min=2, max=12605, avg=67.28, stdev=450.94 00:09:54.257 clat (usec): min=1533, max=26045, avg=9337.93, stdev=2402.45 00:09:54.257 lat (usec): min=1584, max=26059, avg=9405.21, stdev=2437.06 00:09:54.257 clat percentiles (usec): 00:09:54.257 | 1.00th=[ 3458], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 8160], 00:09:54.257 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:09:54.257 | 70.00th=[10028], 
80.00th=[10159], 90.00th=[10290], 95.00th=[11207], 00:09:54.257 | 99.00th=[17957], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:09:54.257 | 99.99th=[26084] 00:09:54.257 bw ( KiB/s): min=24624, max=27272, per=34.50%, avg=25948.00, stdev=1872.42, samples=2 00:09:54.257 iops : min= 6156, max= 6818, avg=6487.00, stdev=468.10, samples=2 00:09:54.257 lat (msec) : 2=0.04%, 4=1.36%, 10=60.92%, 20=37.33%, 50=0.35% 00:09:54.257 cpu : usr=5.65%, sys=7.14%, ctx=628, majf=0, minf=1 00:09:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.257 issued rwts: total=6144,6614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.257 job1: (groupid=0, jobs=1): err= 0: pid=1889041: Mon Dec 9 15:42:49 2024 00:09:54.257 read: IOPS=3775, BW=14.7MiB/s (15.5MB/s)(14.9MiB/1008msec) 00:09:54.257 slat (nsec): min=1560, max=14348k, avg=143753.07, stdev=979448.84 00:09:54.257 clat (msec): min=4, max=121, avg=14.45, stdev=14.48 00:09:54.257 lat (msec): min=4, max=121, avg=14.59, stdev=14.66 00:09:54.257 clat percentiles (msec): 00:09:54.257 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:09:54.257 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:09:54.257 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 18], 95.00th=[ 31], 00:09:54.257 | 99.00th=[ 103], 99.50th=[ 109], 99.90th=[ 122], 99.95th=[ 122], 00:09:54.257 | 99.99th=[ 122] 00:09:54.257 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:09:54.257 slat (usec): min=2, max=11097, avg=104.55, stdev=650.00 00:09:54.257 clat (usec): min=1791, max=121002, avg=17754.06, stdev=21126.76 00:09:54.257 lat (usec): min=1804, max=121007, avg=17858.61, stdev=21212.23 00:09:54.257 clat percentiles (msec): 00:09:54.257 | 1.00th=[ 4], 
5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:09:54.257 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:09:54.257 | 70.00th=[ 11], 80.00th=[ 21], 90.00th=[ 40], 95.00th=[ 61], 00:09:54.257 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:09:54.257 | 99.99th=[ 122] 00:09:54.257 bw ( KiB/s): min=12288, max=20480, per=21.78%, avg=16384.00, stdev=5792.62, samples=2 00:09:54.257 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:09:54.257 lat (msec) : 2=0.03%, 4=0.77%, 10=36.50%, 20=47.32%, 50=9.39% 00:09:54.257 lat (msec) : 100=3.96%, 250=2.04% 00:09:54.257 cpu : usr=2.78%, sys=5.66%, ctx=398, majf=0, minf=1 00:09:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.257 issued rwts: total=3806,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.257 job2: (groupid=0, jobs=1): err= 0: pid=1889042: Mon Dec 9 15:42:49 2024 00:09:54.257 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:09:54.257 slat (nsec): min=1160, max=11836k, avg=121013.62, stdev=827788.66 00:09:54.257 clat (usec): min=3606, max=32077, avg=14681.56, stdev=3969.38 00:09:54.257 lat (usec): min=3615, max=32080, avg=14802.57, stdev=4048.65 00:09:54.257 clat percentiles (usec): 00:09:54.257 | 1.00th=[ 6456], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:09:54.257 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[14091], 00:09:54.257 | 70.00th=[16581], 80.00th=[17433], 90.00th=[19530], 95.00th=[21627], 00:09:54.257 | 99.00th=[29492], 99.50th=[30802], 99.90th=[32113], 99.95th=[32113], 00:09:54.257 | 99.99th=[32113] 00:09:54.257 write: IOPS=3193, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1014msec); 0 zone resets 00:09:54.257 slat (usec): min=2, max=15021, 
avg=187.23, stdev=1105.86 00:09:54.257 clat (usec): min=1159, max=100919, avg=25797.38, stdev=19192.34 00:09:54.257 lat (usec): min=1200, max=100931, avg=25984.61, stdev=19310.23 00:09:54.257 clat percentiles (msec): 00:09:54.257 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 11], 00:09:54.257 | 30.00th=[ 12], 40.00th=[ 17], 50.00th=[ 22], 60.00th=[ 23], 00:09:54.257 | 70.00th=[ 30], 80.00th=[ 40], 90.00th=[ 51], 95.00th=[ 62], 00:09:54.257 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:09:54.257 | 99.99th=[ 102] 00:09:54.257 bw ( KiB/s): min=12416, max=12464, per=16.54%, avg=12440.00, stdev=33.94, samples=2 00:09:54.257 iops : min= 3104, max= 3116, avg=3110.00, stdev= 8.49, samples=2 00:09:54.257 lat (msec) : 2=0.02%, 4=0.62%, 10=4.75%, 20=61.24%, 50=27.65% 00:09:54.257 lat (msec) : 100=5.61%, 250=0.11% 00:09:54.257 cpu : usr=1.78%, sys=4.54%, ctx=293, majf=0, minf=2 00:09:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.257 issued rwts: total=3072,3238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.257 job3: (groupid=0, jobs=1): err= 0: pid=1889043: Mon Dec 9 15:42:49 2024 00:09:54.257 read: IOPS=4691, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1009msec) 00:09:54.257 slat (nsec): min=1367, max=11693k, avg=106068.00, stdev=737720.13 00:09:54.257 clat (usec): min=1476, max=32240, avg=12760.53, stdev=3499.39 00:09:54.257 lat (usec): min=3888, max=32249, avg=12866.60, stdev=3540.05 00:09:54.257 clat percentiles (usec): 00:09:54.257 | 1.00th=[ 4752], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10552], 00:09:54.257 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11731], 60.00th=[12387], 00:09:54.257 | 70.00th=[14222], 80.00th=[14877], 90.00th=[17433], 95.00th=[19530], 00:09:54.257 | 
99.00th=[23987], 99.50th=[28181], 99.90th=[32113], 99.95th=[32113], 00:09:54.257 | 99.99th=[32113] 00:09:54.257 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:09:54.257 slat (usec): min=2, max=13395, avg=92.85, stdev=457.48 00:09:54.257 clat (usec): min=2506, max=34509, avg=13166.96, stdev=6012.65 00:09:54.257 lat (usec): min=2517, max=34517, avg=13259.81, stdev=6063.48 00:09:54.257 clat percentiles (usec): 00:09:54.257 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 7308], 20.00th=[10683], 00:09:54.257 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:09:54.257 | 70.00th=[11994], 80.00th=[15008], 90.00th=[22676], 95.00th=[27395], 00:09:54.257 | 99.00th=[32637], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:09:54.257 | 99.99th=[34341] 00:09:54.257 bw ( KiB/s): min=16384, max=24560, per=27.22%, avg=20472.00, stdev=5781.31, samples=2 00:09:54.257 iops : min= 4096, max= 6140, avg=5118.00, stdev=1445.33, samples=2 00:09:54.257 lat (msec) : 2=0.01%, 4=1.04%, 10=13.32%, 20=76.24%, 50=9.39% 00:09:54.257 cpu : usr=4.17%, sys=5.46%, ctx=649, majf=0, minf=1 00:09:54.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:54.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:54.257 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:54.257 00:09:54.257 Run status group 0 (all jobs): 00:09:54.257 READ: bw=68.4MiB/s (71.7MB/s), 11.8MiB/s-23.8MiB/s (12.4MB/s-24.9MB/s), io=69.4MiB (72.7MB), run=1008-1014msec 00:09:54.257 WRITE: bw=73.5MiB/s (77.0MB/s), 12.5MiB/s-25.6MiB/s (13.1MB/s-26.8MB/s), io=74.5MiB (78.1MB), run=1008-1014msec 00:09:54.257 00:09:54.257 Disk stats (read/write): 00:09:54.257 nvme0n1: ios=5154/5507, merge=0/0, ticks=52585/49408, in_queue=101993, util=98.40% 00:09:54.257 
nvme0n2: ios=3608/3903, merge=0/0, ticks=47101/55556, in_queue=102657, util=94.70% 00:09:54.257 nvme0n3: ios=2197/2560, merge=0/0, ticks=24775/52615, in_queue=77390, util=89.96% 00:09:54.257 nvme0n4: ios=3878/4096, merge=0/0, ticks=48128/55928, in_queue=104056, util=97.04% 00:09:54.257 15:42:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:54.257 15:42:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1889245 00:09:54.257 15:42:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:54.257 15:42:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:54.257 [global] 00:09:54.257 thread=1 00:09:54.257 invalidate=1 00:09:54.257 rw=read 00:09:54.257 time_based=1 00:09:54.257 runtime=10 00:09:54.257 ioengine=libaio 00:09:54.257 direct=1 00:09:54.257 bs=4096 00:09:54.257 iodepth=1 00:09:54.257 norandommap=1 00:09:54.257 numjobs=1 00:09:54.257 00:09:54.257 [job0] 00:09:54.257 filename=/dev/nvme0n1 00:09:54.258 [job1] 00:09:54.258 filename=/dev/nvme0n2 00:09:54.258 [job2] 00:09:54.258 filename=/dev/nvme0n3 00:09:54.258 [job3] 00:09:54.258 filename=/dev/nvme0n4 00:09:54.258 Could not set queue depth (nvme0n1) 00:09:54.258 Could not set queue depth (nvme0n2) 00:09:54.258 Could not set queue depth (nvme0n3) 00:09:54.258 Could not set queue depth (nvme0n4) 00:09:54.515 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.515 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.515 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.515 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.515 fio-3.35 00:09:54.515 Starting 4 threads 
00:09:57.062 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:57.320 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:57.320 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42065920, buflen=4096 00:09:57.320 fio: pid=1889422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.579 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.579 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:57.579 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=692224, buflen=4096 00:09:57.579 fio: pid=1889421, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.838 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.838 15:42:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:57.838 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5259264, buflen=4096 00:09:57.838 fio: pid=1889418, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.838 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55681024, buflen=4096 00:09:57.838 fio: pid=1889419, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:57.838 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.838 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:57.838 00:09:57.838 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1889418: Mon Dec 9 15:42:53 2024 00:09:57.838 read: IOPS=413, BW=1651KiB/s (1691kB/s)(5136KiB/3111msec) 00:09:57.838 slat (usec): min=2, max=18848, avg=23.30, stdev=525.57 00:09:57.838 clat (usec): min=164, max=41968, avg=2380.40, stdev=9131.41 00:09:57.838 lat (usec): min=172, max=59983, avg=2403.70, stdev=9209.72 00:09:57.838 clat percentiles (usec): 00:09:57.838 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:09:57.838 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:09:57.838 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 277], 95.00th=[40633], 00:09:57.838 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:57.838 | 99.99th=[42206] 00:09:57.838 bw ( KiB/s): min= 112, max= 9512, per=5.54%, avg=1706.33, stdev=3824.07, samples=6 00:09:57.838 iops : min= 28, max= 2378, avg=426.50, stdev=956.06, samples=6 00:09:57.838 lat (usec) : 250=84.12%, 500=9.73%, 750=0.78% 00:09:57.838 lat (msec) : 50=5.29% 00:09:57.838 cpu : usr=0.13%, sys=0.80%, ctx=1286, majf=0, minf=1 00:09:57.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.838 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.838 issued rwts: total=1285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.838 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.838 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1889419: Mon Dec 9 15:42:53 2024 00:09:57.838 read: IOPS=4135, 
BW=16.2MiB/s (16.9MB/s)(53.1MiB/3287msec) 00:09:57.838 slat (usec): min=5, max=15504, avg=13.19, stdev=238.62 00:09:57.838 clat (usec): min=156, max=4137, avg=225.14, stdev=50.79 00:09:57.838 lat (usec): min=163, max=15979, avg=238.34, stdev=249.00 00:09:57.838 clat percentiles (usec): 00:09:57.838 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 200], 00:09:57.838 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:09:57.838 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:09:57.838 | 99.00th=[ 420], 99.50th=[ 449], 99.90th=[ 515], 99.95th=[ 523], 00:09:57.838 | 99.99th=[ 586] 00:09:57.838 bw ( KiB/s): min=14760, max=17984, per=54.67%, avg=16844.00, stdev=1423.63, samples=6 00:09:57.838 iops : min= 3690, max= 4496, avg=4211.00, stdev=355.91, samples=6 00:09:57.838 lat (usec) : 250=82.03%, 500=17.79%, 750=0.16% 00:09:57.838 lat (msec) : 10=0.01% 00:09:57.838 cpu : usr=2.16%, sys=5.87%, ctx=13601, majf=0, minf=1 00:09:57.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.838 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.838 issued rwts: total=13595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.839 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1889421: Mon Dec 9 15:42:53 2024 00:09:57.839 read: IOPS=58, BW=233KiB/s (239kB/s)(676KiB/2902msec) 00:09:57.839 slat (nsec): min=7648, max=75141, avg=14967.48, stdev=8653.47 00:09:57.839 clat (usec): min=200, max=42043, avg=17019.16, stdev=20248.44 00:09:57.839 lat (usec): min=209, max=42067, avg=17034.08, stdev=20255.28 00:09:57.839 clat percentiles (usec): 00:09:57.839 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:09:57.839 | 30.00th=[ 237], 40.00th=[ 260], 50.00th=[ 285], 
60.00th=[40633], 00:09:57.839 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:57.839 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:57.839 | 99.99th=[42206] 00:09:57.839 bw ( KiB/s): min= 96, max= 880, per=0.82%, avg=254.40, stdev=349.74, samples=5 00:09:57.839 iops : min= 24, max= 220, avg=63.60, stdev=87.43, samples=5 00:09:57.839 lat (usec) : 250=34.71%, 500=22.94%, 750=1.18% 00:09:57.839 lat (msec) : 50=40.59% 00:09:57.839 cpu : usr=0.00%, sys=0.17%, ctx=172, majf=0, minf=1 00:09:57.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.839 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.839 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.839 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1889422: Mon Dec 9 15:42:53 2024 00:09:57.839 read: IOPS=3802, BW=14.9MiB/s (15.6MB/s)(40.1MiB/2701msec) 00:09:57.839 slat (nsec): min=2366, max=48908, avg=8534.58, stdev=1586.90 00:09:57.839 clat (usec): min=181, max=828, avg=249.97, stdev=28.75 00:09:57.839 lat (usec): min=189, max=838, avg=258.51, stdev=28.73 00:09:57.839 clat percentiles (usec): 00:09:57.839 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:09:57.839 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:09:57.839 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 289], 00:09:57.839 | 99.00th=[ 318], 99.50th=[ 461], 99.90th=[ 515], 99.95th=[ 578], 00:09:57.839 | 99.99th=[ 824] 00:09:57.839 bw ( KiB/s): min=14400, max=15616, per=49.64%, avg=15292.80, stdev=503.38, samples=5 00:09:57.839 iops : min= 3600, max= 3904, avg=3823.20, stdev=125.85, samples=5 00:09:57.839 lat (usec) : 250=57.72%, 500=42.04%, 750=0.21%, 
1000=0.02% 00:09:57.839 cpu : usr=2.22%, sys=6.15%, ctx=10275, majf=0, minf=1 00:09:57.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.839 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.839 issued rwts: total=10271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.839 00:09:57.839 Run status group 0 (all jobs): 00:09:57.839 READ: bw=30.1MiB/s (31.5MB/s), 233KiB/s-16.2MiB/s (239kB/s-16.9MB/s), io=98.9MiB (104MB), run=2701-3287msec 00:09:57.839 00:09:57.839 Disk stats (read/write): 00:09:57.839 nvme0n1: ios=1284/0, merge=0/0, ticks=3043/0, in_queue=3043, util=94.92% 00:09:57.839 nvme0n2: ios=13031/0, merge=0/0, ticks=2772/0, in_queue=2772, util=94.55% 00:09:57.839 nvme0n3: ios=201/0, merge=0/0, ticks=3004/0, in_queue=3004, util=100.00% 00:09:57.839 nvme0n4: ios=9984/0, merge=0/0, ticks=3417/0, in_queue=3417, util=100.00% 00:09:58.098 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.098 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:58.356 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.356 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:58.615 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.615 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:58.873 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.873 15:42:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:58.873 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:58.873 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1889245 00:09:58.873 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:58.873 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:59.132 nvmf hotplug test: fio failed as expected 00:09:59.132 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.391 rmmod nvme_tcp 00:09:59.391 rmmod nvme_fabrics 00:09:59.391 rmmod nvme_keyring 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:59.391 15:42:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1886367 ']' 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1886367 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1886367 ']' 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1886367 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1886367 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1886367' 00:09:59.391 killing process with pid 1886367 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1886367 00:09:59.391 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1886367 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.651 15:42:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.189 00:10:02.189 real 0m27.528s 00:10:02.189 user 1m49.412s 00:10:02.189 sys 0m8.895s 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.189 ************************************ 00:10:02.189 END TEST nvmf_fio_target 00:10:02.189 ************************************ 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.189 ************************************ 
00:10:02.189 START TEST nvmf_bdevio 00:10:02.189 ************************************ 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.189 * Looking for test storage... 00:10:02.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.189 15:42:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.189 15:42:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.189 --rc genhtml_branch_coverage=1 00:10:02.189 --rc genhtml_function_coverage=1 00:10:02.189 --rc genhtml_legend=1 00:10:02.189 --rc geninfo_all_blocks=1 00:10:02.189 --rc geninfo_unexecuted_blocks=1 00:10:02.189 00:10:02.189 ' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.189 --rc genhtml_branch_coverage=1 00:10:02.189 --rc genhtml_function_coverage=1 00:10:02.189 --rc genhtml_legend=1 00:10:02.189 --rc geninfo_all_blocks=1 00:10:02.189 --rc geninfo_unexecuted_blocks=1 00:10:02.189 00:10:02.189 ' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.189 --rc genhtml_branch_coverage=1 00:10:02.189 --rc genhtml_function_coverage=1 00:10:02.189 --rc genhtml_legend=1 00:10:02.189 --rc geninfo_all_blocks=1 00:10:02.189 --rc geninfo_unexecuted_blocks=1 00:10:02.189 00:10:02.189 ' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.189 --rc genhtml_branch_coverage=1 00:10:02.189 --rc genhtml_function_coverage=1 00:10:02.189 --rc genhtml_legend=1 00:10:02.189 --rc geninfo_all_blocks=1 00:10:02.189 --rc geninfo_unexecuted_blocks=1 00:10:02.189 00:10:02.189 ' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.189 15:42:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.189 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.190 15:42:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.761 15:43:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.761 15:43:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:08.761 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:08.761 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.761 
15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:08.761 Found net devices under 0000:af:00.0: cvl_0_0 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:08.761 Found net devices under 0000:af:00.1: cvl_0_1 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.761 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:08.762 00:10:08.762 --- 10.0.0.2 ping statistics --- 00:10:08.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.762 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:08.762 00:10:08.762 --- 10.0.0.1 ping statistics --- 00:10:08.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.762 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.762 15:43:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.762 15:43:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1893687 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1893687 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1893687 ']' 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-09 15:43:03.072890] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:10:08.762 [2024-12-09 15:43:03.072942] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.762 [2024-12-09 15:43:03.152839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.762 [2024-12-09 15:43:03.193484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.762 [2024-12-09 15:43:03.193521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.762 [2024-12-09 15:43:03.193529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.762 [2024-12-09 15:43:03.193535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.762 [2024-12-09 15:43:03.193540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:08.762 [2024-12-09 15:43:03.194965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:08.762 [2024-12-09 15:43:03.195076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:08.762 [2024-12-09 15:43:03.195184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.762 [2024-12-09 15:43:03.195185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-09 15:43:03.344146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.762 15:43:03 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 Malloc0 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:08.762 [2024-12-09 15:43:03.406813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.762 { 00:10:08.762 "params": { 00:10:08.762 "name": "Nvme$subsystem", 00:10:08.762 "trtype": "$TEST_TRANSPORT", 00:10:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.762 "adrfam": "ipv4", 00:10:08.762 "trsvcid": "$NVMF_PORT", 00:10:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.762 "hdgst": ${hdgst:-false}, 00:10:08.762 "ddgst": ${ddgst:-false} 00:10:08.762 }, 00:10:08.762 "method": "bdev_nvme_attach_controller" 00:10:08.762 } 00:10:08.762 EOF 00:10:08.762 )") 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:08.762 15:43:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.762 "params": { 00:10:08.762 "name": "Nvme1", 00:10:08.762 "trtype": "tcp", 00:10:08.762 "traddr": "10.0.0.2", 00:10:08.762 "adrfam": "ipv4", 00:10:08.762 "trsvcid": "4420", 00:10:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.762 "hdgst": false, 00:10:08.762 "ddgst": false 00:10:08.762 }, 00:10:08.762 "method": "bdev_nvme_attach_controller" 00:10:08.762 }' 00:10:08.762 [2024-12-09 15:43:03.455605] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:10:08.762 [2024-12-09 15:43:03.455647] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893865 ] 00:10:08.762 [2024-12-09 15:43:03.527866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.762 [2024-12-09 15:43:03.570087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.762 [2024-12-09 15:43:03.570194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.762 [2024-12-09 15:43:03.570195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.762 I/O targets: 00:10:08.762 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:08.762 00:10:08.762 00:10:08.762 CUnit - A unit testing framework for C - Version 2.1-3 00:10:08.762 http://cunit.sourceforge.net/ 00:10:08.762 00:10:08.762 00:10:08.762 Suite: bdevio tests on: Nvme1n1 00:10:08.762 Test: blockdev write read block ...passed 00:10:08.762 Test: blockdev write zeroes read block ...passed 00:10:08.762 Test: blockdev write zeroes read no split ...passed 00:10:09.020 Test: blockdev write zeroes read split 
...passed 00:10:09.020 Test: blockdev write zeroes read split partial ...passed 00:10:09.020 Test: blockdev reset ...[2024-12-09 15:43:04.041423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:09.020 [2024-12-09 15:43:04.041486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10188b0 (9): Bad file descriptor 00:10:09.020 [2024-12-09 15:43:04.054868] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:09.020 passed 00:10:09.020 Test: blockdev write read 8 blocks ...passed 00:10:09.020 Test: blockdev write read size > 128k ...passed 00:10:09.020 Test: blockdev write read invalid size ...passed 00:10:09.020 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.020 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.020 Test: blockdev write read max offset ...passed 00:10:09.020 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.020 Test: blockdev writev readv 8 blocks ...passed 00:10:09.020 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.278 Test: blockdev writev readv block ...passed 00:10:09.278 Test: blockdev writev readv size > 128k ...passed 00:10:09.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.278 Test: blockdev comparev and writev ...[2024-12-09 15:43:04.270367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.270411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 
15:43:04.270419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.270655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.270666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.270677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.270684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.270912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.270926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.270937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.270944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.271161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.271171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.271182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:09.278 [2024-12-09 15:43:04.271189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:09.278 passed 00:10:09.278 Test: blockdev nvme passthru rw ...passed 00:10:09.278 Test: blockdev nvme passthru vendor specific ...[2024-12-09 15:43:04.354513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:09.278 [2024-12-09 15:43:04.354533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.354637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:09.278 [2024-12-09 15:43:04.354646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.354763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:09.278 [2024-12-09 15:43:04.354772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:09.278 [2024-12-09 15:43:04.354890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:09.278 [2024-12-09 15:43:04.354899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:09.278 passed 00:10:09.278 Test: blockdev nvme admin passthru ...passed 00:10:09.278 Test: blockdev copy ...passed 00:10:09.278 00:10:09.278 Run Summary: Type Total Ran Passed Failed Inactive 00:10:09.278 suites 1 1 n/a 0 0 00:10:09.278 tests 23 23 23 0 0 00:10:09.278 asserts 152 152 152 0 n/a 00:10:09.278 00:10:09.278 Elapsed time = 1.143 seconds 
00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.537 rmmod nvme_tcp 00:10:09.537 rmmod nvme_fabrics 00:10:09.537 rmmod nvme_keyring 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1893687 ']' 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1893687 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1893687 ']' 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1893687 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1893687 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1893687' 00:10:09.537 killing process with pid 1893687 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1893687 00:10:09.537 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1893687 00:10:09.795 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.795 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.795 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.795 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:09.795 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.796 15:43:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.329 00:10:12.329 real 0m10.072s 00:10:12.329 user 0m10.717s 00:10:12.329 sys 0m4.932s 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 ************************************ 00:10:12.329 END TEST nvmf_bdevio 00:10:12.329 ************************************ 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:12.329 00:10:12.329 real 4m36.084s 00:10:12.329 user 10m20.129s 00:10:12.329 sys 1m38.092s 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.329 15:43:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.329 ************************************ 00:10:12.329 END TEST nvmf_target_core 00:10:12.329 ************************************ 00:10:12.329 15:43:07 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:12.329 15:43:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.329 15:43:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.329 15:43:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:12.329 ************************************ 00:10:12.329 START TEST nvmf_target_extra 00:10:12.329 ************************************ 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:12.329 * Looking for test storage... 00:10:12.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.329 --rc genhtml_branch_coverage=1 00:10:12.329 --rc genhtml_function_coverage=1 00:10:12.329 --rc genhtml_legend=1 00:10:12.329 --rc geninfo_all_blocks=1 
00:10:12.329 --rc geninfo_unexecuted_blocks=1 00:10:12.329 00:10:12.329 ' 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.329 --rc genhtml_branch_coverage=1 00:10:12.329 --rc genhtml_function_coverage=1 00:10:12.329 --rc genhtml_legend=1 00:10:12.329 --rc geninfo_all_blocks=1 00:10:12.329 --rc geninfo_unexecuted_blocks=1 00:10:12.329 00:10:12.329 ' 00:10:12.329 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.329 --rc genhtml_branch_coverage=1 00:10:12.329 --rc genhtml_function_coverage=1 00:10:12.329 --rc genhtml_legend=1 00:10:12.330 --rc geninfo_all_blocks=1 00:10:12.330 --rc geninfo_unexecuted_blocks=1 00:10:12.330 00:10:12.330 ' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.330 --rc genhtml_branch_coverage=1 00:10:12.330 --rc genhtml_function_coverage=1 00:10:12.330 --rc genhtml_legend=1 00:10:12.330 --rc geninfo_all_blocks=1 00:10:12.330 --rc geninfo_unexecuted_blocks=1 00:10:12.330 00:10:12.330 ' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:12.330 ************************************ 00:10:12.330 START TEST nvmf_example 00:10:12.330 ************************************ 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:12.330 * Looking for test storage... 00:10:12.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.330 
15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.330 --rc genhtml_branch_coverage=1 00:10:12.330 --rc genhtml_function_coverage=1 00:10:12.330 --rc genhtml_legend=1 00:10:12.330 --rc geninfo_all_blocks=1 00:10:12.330 --rc geninfo_unexecuted_blocks=1 00:10:12.330 00:10:12.330 ' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.330 --rc genhtml_branch_coverage=1 00:10:12.330 --rc genhtml_function_coverage=1 00:10:12.330 --rc genhtml_legend=1 00:10:12.330 --rc geninfo_all_blocks=1 00:10:12.330 --rc geninfo_unexecuted_blocks=1 00:10:12.330 00:10:12.330 ' 00:10:12.330 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.330 --rc genhtml_branch_coverage=1 00:10:12.330 --rc genhtml_function_coverage=1 00:10:12.330 --rc genhtml_legend=1 00:10:12.330 --rc geninfo_all_blocks=1 00:10:12.330 --rc geninfo_unexecuted_blocks=1 00:10:12.330 00:10:12.330 ' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.331 --rc 
genhtml_branch_coverage=1 00:10:12.331 --rc genhtml_function_coverage=1 00:10:12.331 --rc genhtml_legend=1 00:10:12.331 --rc geninfo_all_blocks=1 00:10:12.331 --rc geninfo_unexecuted_blocks=1 00:10:12.331 00:10:12.331 ' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:12.331 15:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.331 
15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.331 15:43:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.900 15:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:10:18.900 Found 0000:af:00.0 (0x8086 - 0x159b)
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:10:18.900 Found 0000:af:00.1 (0x8086 - 0x159b)
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:10:18.900 Found net devices under 0000:af:00.0: cvl_0_0
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:18.900 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:10:18.901 Found net devices under 0000:af:00.1: cvl_0_1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:18.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:18.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms
00:10:18.901
00:10:18.901 --- 10.0.0.2 ping statistics ---
00:10:18.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:18.901 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:18.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:18.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms
00:10:18.901
00:10:18.901 --- 10.0.0.1 ping statistics ---
00:10:18.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:18.901 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1897657
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1897657
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1897657 ']'
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:18.901 15:43:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.158 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:19.158 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:10:19.159 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:10:19.159 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:19.159 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:19.417 15:43:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:31.729 Initializing NVMe Controllers
00:10:31.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:31.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:31.729 Initialization complete. Launching workers.
00:10:31.729 ========================================================
00:10:31.729 Latency(us)
00:10:31.729 Device Information : IOPS MiB/s Average min max
00:10:31.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18377.42 71.79 3481.92 471.10 18302.70
00:10:31.729 ========================================================
00:10:31.729 Total : 18377.42 71.79 3481.92 471.10 18302.70
00:10:31.729
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:31.729 rmmod nvme_tcp
00:10:31.729 rmmod nvme_fabrics
00:10:31.729 rmmod nvme_keyring
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1897657 ']'
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1897657
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1897657 ']'
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1897657
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1897657
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1897657'
00:10:31.729 killing process with pid 1897657
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1897657
00:10:31.729 15:43:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1897657
00:10:31.729 nvmf threads initialize successfully
00:10:31.729 bdev subsystem init successfully
00:10:31.729 created a nvmf target service
00:10:31.729 create targets's poll groups done
00:10:31.729 all subsystems of target started
00:10:31.729 nvmf target is running
00:10:31.729 all subsystems of target stopped
00:10:31.729 destroy targets's poll groups done
00:10:31.729 destroyed the nvmf target service
00:10:31.729 bdev subsystem finish successfully
00:10:31.729 nvmf threads destroy successfully
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:31.729 15:43:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:31.989
00:10:31.989 real 0m19.905s
00:10:31.989 user 0m46.453s
00:10:31.989 sys 0m6.128s
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:31.989 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:31.989 ************************************
00:10:31.989 END TEST nvmf_example
00:10:31.989 ************************************
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:32.249 ************************************
00:10:32.249 START TEST nvmf_filesystem
00:10:32.249 ************************************
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:32.249 * Looking for test storage...
00:10:32.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:32.249 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:32.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.250 --rc genhtml_branch_coverage=1
00:10:32.250 --rc genhtml_function_coverage=1
00:10:32.250 --rc genhtml_legend=1
00:10:32.250 --rc geninfo_all_blocks=1
00:10:32.250 --rc geninfo_unexecuted_blocks=1
00:10:32.250
00:10:32.250 '
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:32.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.250 --rc genhtml_branch_coverage=1
00:10:32.250 --rc genhtml_function_coverage=1
00:10:32.250 --rc genhtml_legend=1
00:10:32.250 --rc geninfo_all_blocks=1
00:10:32.250 --rc geninfo_unexecuted_blocks=1
00:10:32.250
00:10:32.250 '
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:32.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.250 --rc genhtml_branch_coverage=1
00:10:32.250 --rc genhtml_function_coverage=1
00:10:32.250 --rc genhtml_legend=1
00:10:32.250 --rc geninfo_all_blocks=1
00:10:32.250 --rc geninfo_unexecuted_blocks=1
00:10:32.250
00:10:32.250 '
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:32.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:32.250 --rc genhtml_branch_coverage=1
00:10:32.250 --rc genhtml_function_coverage=1
00:10:32.250 --rc genhtml_legend=1
00:10:32.250 --rc geninfo_all_blocks=1
00:10:32.250 --rc geninfo_unexecuted_blocks=1
00:10:32.250
00:10:32.250 '
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:10:32.250 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- #
CONFIG_DPDK_PKG_CONFIG=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:32.251 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:32.251 
15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:32.251 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:32.251 #define SPDK_CONFIG_H 00:10:32.251 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:32.251 #define SPDK_CONFIG_APPS 1 00:10:32.251 #define SPDK_CONFIG_ARCH native 00:10:32.251 #undef SPDK_CONFIG_ASAN 00:10:32.251 #undef SPDK_CONFIG_AVAHI 00:10:32.251 #undef SPDK_CONFIG_CET 00:10:32.251 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:32.251 #define SPDK_CONFIG_COVERAGE 1 00:10:32.251 #define SPDK_CONFIG_CROSS_PREFIX 00:10:32.251 #undef SPDK_CONFIG_CRYPTO 00:10:32.251 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:32.251 #undef SPDK_CONFIG_CUSTOMOCF 00:10:32.251 #undef SPDK_CONFIG_DAOS 00:10:32.251 #define SPDK_CONFIG_DAOS_DIR 00:10:32.251 #define SPDK_CONFIG_DEBUG 1 00:10:32.251 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:32.251 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:32.251 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:32.251 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:32.251 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:32.251 #undef SPDK_CONFIG_DPDK_UADK 00:10:32.251 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:32.251 #define SPDK_CONFIG_EXAMPLES 1 00:10:32.251 #undef SPDK_CONFIG_FC 00:10:32.251 #define SPDK_CONFIG_FC_PATH 00:10:32.251 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:32.251 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:32.251 #define SPDK_CONFIG_FSDEV 1 00:10:32.251 #undef SPDK_CONFIG_FUSE 00:10:32.251 #undef SPDK_CONFIG_FUZZER 00:10:32.251 #define SPDK_CONFIG_FUZZER_LIB 00:10:32.251 #undef SPDK_CONFIG_GOLANG 00:10:32.251 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:32.251 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:32.251 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:32.251 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:32.251 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:32.251 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:32.251 #undef SPDK_CONFIG_HAVE_LZ4 00:10:32.251 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:32.251 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:32.251 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:32.251 #define SPDK_CONFIG_IDXD 1 00:10:32.251 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:32.251 #undef SPDK_CONFIG_IPSEC_MB 00:10:32.251 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:32.251 #define SPDK_CONFIG_ISAL 1 00:10:32.251 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:32.251 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:32.251 #define SPDK_CONFIG_LIBDIR 00:10:32.251 #undef SPDK_CONFIG_LTO 00:10:32.251 #define SPDK_CONFIG_MAX_LCORES 128 00:10:32.251 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:32.251 #define SPDK_CONFIG_NVME_CUSE 1 00:10:32.251 #undef SPDK_CONFIG_OCF 00:10:32.251 #define SPDK_CONFIG_OCF_PATH 00:10:32.251 #define SPDK_CONFIG_OPENSSL_PATH 00:10:32.251 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:32.251 #define SPDK_CONFIG_PGO_DIR 00:10:32.251 #undef SPDK_CONFIG_PGO_USE 00:10:32.251 #define SPDK_CONFIG_PREFIX /usr/local 00:10:32.251 #undef SPDK_CONFIG_RAID5F 00:10:32.251 #undef SPDK_CONFIG_RBD 00:10:32.251 #define SPDK_CONFIG_RDMA 1 00:10:32.251 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:32.251 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:32.251 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:32.251 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:32.251 #define SPDK_CONFIG_SHARED 1 00:10:32.251 #undef SPDK_CONFIG_SMA 00:10:32.251 #define SPDK_CONFIG_TESTS 1 00:10:32.251 #undef SPDK_CONFIG_TSAN 00:10:32.251 #define SPDK_CONFIG_UBLK 1 00:10:32.251 #define SPDK_CONFIG_UBSAN 1 00:10:32.251 #undef SPDK_CONFIG_UNIT_TESTS 00:10:32.251 #undef SPDK_CONFIG_URING 00:10:32.251 #define SPDK_CONFIG_URING_PATH 00:10:32.251 #undef SPDK_CONFIG_URING_ZNS 00:10:32.251 #undef SPDK_CONFIG_USDT 00:10:32.251 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:32.251 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:32.251 #define SPDK_CONFIG_VFIO_USER 1 00:10:32.251 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:32.251 #define SPDK_CONFIG_VHOST 1 00:10:32.252 #define SPDK_CONFIG_VIRTIO 1 00:10:32.252 #undef SPDK_CONFIG_VTUNE 00:10:32.252 #define SPDK_CONFIG_VTUNE_DIR 00:10:32.252 #define SPDK_CONFIG_WERROR 1 00:10:32.252 #define SPDK_CONFIG_WPDK_DIR 00:10:32.252 #undef SPDK_CONFIG_XNVME 00:10:32.252 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:32.252 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.514 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.514 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.514 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:32.515 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:32.515 
15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:32.515 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:32.515 
15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:32.515 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:32.516 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:32.516 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1900029 ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1900029 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.GVhMKE 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.GVhMKE/tests/target /tmp/spdk.GVhMKE 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=93711540224 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837199872 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7125659648 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50408566784 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418597888 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.517 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144431104 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50418356224 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=245760 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120 00:10:32.518 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:32.518 * Looking for test storage... 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=93711540224 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@394 -- # new_size=9340252160 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 
00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.518 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.518 --rc genhtml_branch_coverage=1 00:10:32.518 --rc genhtml_function_coverage=1 00:10:32.518 --rc genhtml_legend=1 00:10:32.518 --rc geninfo_all_blocks=1 00:10:32.518 --rc geninfo_unexecuted_blocks=1 00:10:32.518 00:10:32.518 ' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.518 --rc genhtml_branch_coverage=1 00:10:32.518 --rc genhtml_function_coverage=1 00:10:32.518 --rc genhtml_legend=1 00:10:32.518 --rc geninfo_all_blocks=1 00:10:32.518 --rc geninfo_unexecuted_blocks=1 00:10:32.518 00:10:32.518 ' 00:10:32.518 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.519 --rc genhtml_branch_coverage=1 00:10:32.519 --rc genhtml_function_coverage=1 00:10:32.519 --rc genhtml_legend=1 00:10:32.519 --rc geninfo_all_blocks=1 00:10:32.519 --rc geninfo_unexecuted_blocks=1 00:10:32.519 00:10:32.519 ' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.519 --rc 
genhtml_branch_coverage=1 00:10:32.519 --rc genhtml_function_coverage=1 00:10:32.519 --rc genhtml_legend=1 00:10:32.519 --rc geninfo_all_blocks=1 00:10:32.519 --rc geninfo_unexecuted_blocks=1 00:10:32.519 00:10:32.519 ' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:10:32.519 15:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
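Editor's note: the `[: : integer expression expected` error printed above comes from `nvmf/common.sh@33` evaluating `'[' '' -eq 1 ']'`, i.e. an empty variable reaching an integer comparison. A minimal reproduction with the usual `${var:-0}` default guard (the helper name `flag_is_set` is mine, not from the script):

```shell
# flag_is_set mirrors the common.sh@33 pattern but defaults an empty/unset
# flag to 0, so "" never reaches the integer comparison that errored above.
flag_is_set() {
  if [ "${1:-0}" -eq 1 ]; then echo yes; else echo no; fi
}

flag_is_set ""   # the empty-string case that tripped the log; degrades to "no"
flag_is_set 1    # prints "yes"
```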
MALLOC_BDEV_SIZE=512 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.519 15:43:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.094 15:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:39.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:39.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.094 15:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:39.094 Found net devices under 0000:af:00.0: cvl_0_0 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:39.094 Found net devices under 0000:af:00.1: cvl_0_1 00:10:39.094 15:43:33 
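Editor's note: the discovery loop traced above maps each PCI function to its netdev by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the directory prefix (`${pci_net_devs[@]##*/}`). A sketch of that lookup, parameterized on the sysfs root so it can run unprivileged against a fake tree:

```shell
# pci_net_names: list the network interface names exposed under a PCI
# function's net/ directory, as nvmf/common.sh@411-427 does via glob + ##*/.
pci_net_names() {
  local sysfs_root=$1 pci=$2 d
  for d in "$sysfs_root/$pci/net/"*; do
    # An unmatched glob stays literal; -e filters that case out.
    [ -e "$d" ] && echo "${d##*/}"
  done
}
```

Against the real sysfs this would be called as `pci_net_names /sys/bus/pci/devices 0000:af:00.0`, yielding `cvl_0_0` on the node in this log.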
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.094 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:39.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:10:39.095 00:10:39.095 --- 10.0.0.2 ping statistics --- 00:10:39.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.095 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:10:39.095 00:10:39.095 --- 10.0.0.1 ping statistics --- 00:10:39.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.095 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:39.095 15:43:33 
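Editor's note: `nvmftestinit` above builds a single-host loopback topology by moving the target-side port into a network namespace, then verifying both directions with `ping`. The sequence it issues (interface names `cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`, and the `10.0.0.0/24` addresses are taken directly from the log) needs root, so this helper only prints it:

```shell
# print_netns_setup: the namespace wiring nvmf/common.sh@265-291 performed
# above, target interface inside the netns at 10.0.0.2, initiator outside
# at 10.0.0.1, port 4420 opened for NVMe/TCP.
print_netns_setup() {
  local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

print_netns_setup
```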
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.095 ************************************ 00:10:39.095 START TEST nvmf_filesystem_no_in_capsule 00:10:39.095 ************************************ 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1903113 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1903113 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1903113 ']' 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.095 15:43:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.095 [2024-12-09 15:43:33.812376] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:10:39.095 [2024-12-09 15:43:33.812417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.095 [2024-12-09 15:43:33.891469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.095 [2024-12-09 15:43:33.932005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.095 [2024-12-09 15:43:33.932042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:39.095 [2024-12-09 15:43:33.932048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.095 [2024-12-09 15:43:33.932055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.095 [2024-12-09 15:43:33.932059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.095 [2024-12-09 15:43:33.933609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.095 [2024-12-09 15:43:33.933720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.095 [2024-12-09 15:43:33.933828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.095 [2024-12-09 15:43:33.933829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.095 [2024-12-09 15:43:34.067567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.095 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.096 Malloc1 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.096 [2024-12-09 15:43:34.219353] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:39.096 15:43:34 
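Editor's note: `filesystem.sh@52-56` above drives target setup through `rpc_cmd` (the autotest wrapper around `scripts/rpc.py`). The equivalent RPC sequence, with the exact values from this run, printed rather than executed since it needs a live `nvmf_tgt`:

```shell
# print_target_setup: transport -> malloc bdev -> subsystem -> namespace ->
# listener, matching the rpc_cmd calls traced above.
print_target_setup() {
  cat <<'EOF'
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 512 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
}

print_target_setup
```

The `-c 0` on `nvmf_create_transport` is the `in_capsule=0` under test in `nvmf_filesystem_no_in_capsule`; after the listener is up, the initiator side connects with `nvme connect` against `10.0.0.2:4420`.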
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:39.096 {
00:10:39.096 "name": "Malloc1",
00:10:39.096 "aliases": [
00:10:39.096 "50f9aea6-7dee-4a86-bae0-09907f1db163"
00:10:39.096 ],
00:10:39.096 "product_name": "Malloc disk",
00:10:39.096 "block_size": 512,
00:10:39.096 "num_blocks": 1048576,
00:10:39.096 "uuid": "50f9aea6-7dee-4a86-bae0-09907f1db163",
00:10:39.096 "assigned_rate_limits": {
00:10:39.096 "rw_ios_per_sec": 0,
00:10:39.096 "rw_mbytes_per_sec": 0,
00:10:39.096 "r_mbytes_per_sec": 0,
00:10:39.096 "w_mbytes_per_sec": 0
00:10:39.096 },
00:10:39.096 "claimed": true,
00:10:39.096 "claim_type": "exclusive_write",
00:10:39.096 "zoned": false,
00:10:39.096 "supported_io_types": {
00:10:39.096 "read": true,
00:10:39.096 "write": true,
00:10:39.096 "unmap": true,
00:10:39.096 "flush": true,
00:10:39.096 "reset": true,
00:10:39.096 "nvme_admin": false,
00:10:39.096 "nvme_io": false,
00:10:39.096 "nvme_io_md": false,
00:10:39.096 "write_zeroes": true,
00:10:39.096 "zcopy": true,
00:10:39.096 "get_zone_info": false,
00:10:39.096 "zone_management": false,
00:10:39.096 "zone_append": false,
00:10:39.096 "compare": false,
00:10:39.096 "compare_and_write": false,
00:10:39.096 "abort": true,
00:10:39.096 "seek_hole": false,
00:10:39.096 "seek_data": false,
00:10:39.096 "copy": true,
00:10:39.096 "nvme_iov_md": false
00:10:39.096 },
00:10:39.096 "memory_domains": [
00:10:39.096 {
00:10:39.096 "dma_device_id": "system",
00:10:39.096 "dma_device_type": 1
00:10:39.096 },
00:10:39.096 {
00:10:39.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:39.096 "dma_device_type": 2
00:10:39.096 }
00:10:39.096 ],
00:10:39.096 "driver_specific": {}
00:10:39.096 }
00:10:39.096 ]'
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:39.096 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:39.354 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:39.354 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:39.354 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:39.354 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:39.354 15:43:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:40.727 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 --
# waitforserial SPDKISFASTANDAWESOME 00:10:40.727 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.727 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.727 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:40.727 15:43:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.626 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.626 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.626 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.626 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.627 15:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.627 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:42.885 15:43:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:43.821 15:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.821 ************************************ 00:10:43.821 START TEST filesystem_ext4 00:10:43.821 ************************************ 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:43.821 15:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:10:43.821 15:43:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:43.821 mke2fs 1.47.0 (5-Feb-2023)
00:10:43.821 Discarding device blocks: 0/522240 done
00:10:43.821 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:43.821 Filesystem UUID: e7add3f6-f51b-48fb-8240-491a57a441eb
00:10:43.821 Superblock backups stored on blocks:
00:10:43.821 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:43.821
00:10:43.821 Allocating group tables: 0/64 done
00:10:43.821 Writing inode tables: 0/64 done
00:10:44.079 Creating journal (8192 blocks): done
00:10:46.534 Writing superblocks and filesystem accounting information: 0/64 6/64 done
00:10:46.534
00:10:46.534 15:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:46.534 15:43:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:53.092 15:43:47
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1903113 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.092 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.093 00:10:53.093 real 0m8.548s 00:10:53.093 user 0m0.027s 00:10:53.093 sys 0m0.075s 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 ************************************ 00:10:53.093 END TEST filesystem_ext4 00:10:53.093 ************************************ 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:53.093 
15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 ************************************ 00:10:53.093 START TEST filesystem_btrfs 00:10:53.093 ************************************ 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:53.093 15:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:53.093 btrfs-progs v6.8.1
00:10:53.093 See https://btrfs.readthedocs.io for more information.
00:10:53.093
00:10:53.093 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:53.093 NOTE: several default settings have changed in version 5.15, please make sure
00:10:53.093 this does not affect your deployments:
00:10:53.093 - DUP for metadata (-m dup)
00:10:53.093 - enabled no-holes (-O no-holes)
00:10:53.093 - enabled free-space-tree (-R free-space-tree)
00:10:53.093
00:10:53.093 Label: (null)
00:10:53.093 UUID: 18a3cd2c-ebe3-40fa-823c-c3e50df30084
00:10:53.093 Node size: 16384
00:10:53.093 Sector size: 4096 (CPU page size: 4096)
00:10:53.093 Filesystem size: 510.00MiB
00:10:53.093 Block group profiles:
00:10:53.093 Data: single 8.00MiB
00:10:53.093 Metadata: DUP 32.00MiB
00:10:53.093 System: DUP 8.00MiB
00:10:53.093 SSD detected: yes
00:10:53.093 Zoned device: no
00:10:53.093 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:53.093 Checksum: crc32c
00:10:53.093 Number of devices: 1
00:10:53.093 Devices:
00:10:53.093 ID SIZE PATH
00:10:53.093 1 510.00MiB /dev/nvme0n1p1
00:10:53.093
00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:53.093 15:43:47
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:53.093 15:43:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1903113 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.093 00:10:53.093 real 0m0.500s 00:10:53.093 user 0m0.025s 00:10:53.093 sys 0m0.113s 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.093 
15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 ************************************ 00:10:53.093 END TEST filesystem_btrfs 00:10:53.093 ************************************ 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 ************************************ 00:10:53.093 START TEST filesystem_xfs 00:10:53.093 ************************************ 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:53.093 15:43:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:53.093 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:53.093 = sectsz=512 attr=2, projid32bit=1
00:10:53.093 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:53.093 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:53.093 data = bsize=4096 blocks=130560, imaxpct=25
00:10:53.093 = sunit=0 swidth=0 blks
00:10:53.093 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:53.093 log =internal log bsize=4096 blocks=16384, version=2
00:10:53.093 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:53.093 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:54.027 Discarding blocks...Done.
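The three mkfs invocations traced above (common/autotest_common.sh lines 935-941) differ only in the force flag: mke2fs takes an uppercase `-F`, while mkfs.btrfs and mkfs.xfs take a lowercase `-f`. The sketch below re-derives just that flag selection so it runs anywhere; it is a simplified illustration, not the actual SPDK `make_filesystem` helper.

```shell
# Hedged sketch of the force-flag choice seen in the trace:
# ext4 -> -F (mke2fs), everything else -> -f (mkfs.btrfs, mkfs.xfs).
pick_force_flag() {
    if [ "$1" = ext4 ]; then
        printf '%s\n' -F   # as in: mkfs.ext4 -F /dev/nvme0n1p1
    else
        printf '%s\n' -f   # as in: mkfs.btrfs -f / mkfs.xfs -f
    fi
}

pick_force_flag ext4    # prints -F
pick_force_flag btrfs   # prints -f
pick_force_flag xfs     # prints -f
```

The helper in the trace then retries mkfs in a loop (`local i=0` / `local force`) before giving up, which is why the flag is computed once up front.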
00:10:54.027 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:54.027 15:43:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1903113 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.557 15:43:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.557 00:10:56.557 real 0m3.496s 00:10:56.557 user 0m0.025s 00:10:56.557 sys 0m0.073s 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.557 ************************************ 00:10:56.557 END TEST filesystem_xfs 00:10:56.557 ************************************ 00:10:56.557 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.816 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.816 15:43:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.816 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.816 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:56.816 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:56.816 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1903113 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1903113 ']' 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1903113 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903113 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903113' 00:10:57.075 killing process with pid 1903113 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1903113 00:10:57.075 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1903113 00:10:57.334 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.335 00:10:57.335 real 0m18.692s 00:10:57.335 user 1m13.642s 00:10:57.335 sys 0m1.391s 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.335 ************************************ 00:10:57.335 END TEST nvmf_filesystem_no_in_capsule 00:10:57.335 ************************************ 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.335 15:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.335 ************************************ 00:10:57.335 START TEST nvmf_filesystem_in_capsule 00:10:57.335 ************************************ 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1906448 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1906448 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1906448 ']' 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.335 15:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.335 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.594 [2024-12-09 15:43:52.582370] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:10:57.594 [2024-12-09 15:43:52.582412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.594 [2024-12-09 15:43:52.656523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.594 [2024-12-09 15:43:52.697200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.594 [2024-12-09 15:43:52.697241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.594 [2024-12-09 15:43:52.697248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.594 [2024-12-09 15:43:52.697253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.594 [2024-12-09 15:43:52.697259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.594 [2024-12-09 15:43:52.698783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.594 [2024-12-09 15:43:52.698894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.594 [2024-12-09 15:43:52.699003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.594 [2024-12-09 15:43:52.699004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.594 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.594 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:57.594 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.594 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.594 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 [2024-12-09 15:43:52.836545] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 15:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 [2024-12-09 15:43:52.990372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.854 15:43:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.854 15:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.854 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:57.854 { 00:10:57.854 "name": "Malloc1", 00:10:57.854 "aliases": [ 00:10:57.854 "2c56d493-c748-452c-8f25-f1709596f6d6" 00:10:57.854 ], 00:10:57.854 "product_name": "Malloc disk", 00:10:57.854 "block_size": 512, 00:10:57.854 "num_blocks": 1048576, 00:10:57.854 "uuid": "2c56d493-c748-452c-8f25-f1709596f6d6", 00:10:57.854 "assigned_rate_limits": { 00:10:57.854 "rw_ios_per_sec": 0, 00:10:57.854 "rw_mbytes_per_sec": 0, 00:10:57.854 "r_mbytes_per_sec": 0, 00:10:57.854 "w_mbytes_per_sec": 0 00:10:57.854 }, 00:10:57.854 "claimed": true, 00:10:57.854 "claim_type": "exclusive_write", 00:10:57.854 "zoned": false, 00:10:57.854 "supported_io_types": { 00:10:57.854 "read": true, 00:10:57.854 "write": true, 00:10:57.854 "unmap": true, 00:10:57.854 "flush": true, 00:10:57.854 "reset": true, 00:10:57.854 "nvme_admin": false, 00:10:57.854 "nvme_io": false, 00:10:57.854 "nvme_io_md": false, 00:10:57.854 "write_zeroes": true, 00:10:57.854 "zcopy": true, 00:10:57.854 "get_zone_info": false, 00:10:57.854 "zone_management": false, 00:10:57.854 "zone_append": false, 00:10:57.854 "compare": false, 00:10:57.854 "compare_and_write": false, 00:10:57.854 "abort": true, 00:10:57.854 "seek_hole": false, 00:10:57.854 "seek_data": false, 00:10:57.854 "copy": true, 00:10:57.854 "nvme_iov_md": false 00:10:57.854 }, 00:10:57.854 "memory_domains": [ 00:10:57.854 { 00:10:57.854 "dma_device_id": "system", 00:10:57.854 "dma_device_type": 1 00:10:57.854 }, 00:10:57.854 { 00:10:57.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.854 "dma_device_type": 2 00:10:57.854 } 00:10:57.854 ], 00:10:57.854 
"driver_specific": {} 00:10:57.854 } 00:10:57.854 ]' 00:10:57.855 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:57.855 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:57.855 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:58.113 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:58.113 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:58.113 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:58.113 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:58.113 15:43:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.047 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.047 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.047 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.047 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:59.047 15:43:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.574 15:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.574 15:43:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.947 ************************************ 00:11:02.947 START TEST filesystem_in_capsule_ext4 00:11:02.947 ************************************ 00:11:02.947 15:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:02.947 15:43:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.947 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.947 Discarding device blocks: 
0/522240 done 00:11:02.947 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.947 Filesystem UUID: d0c54580-2116-4721-9e35-fd657dab16c3 00:11:02.947 Superblock backups stored on blocks: 00:11:02.947 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.947 00:11:02.947 Allocating group tables: 0/64 done 00:11:02.947 Writing inode tables: 0/64 done 00:11:04.846 Creating journal (8192 blocks): done 00:11:06.294 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:06.294 00:11:06.294 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:06.294 15:44:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:11.553 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1906448 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.811 00:11:11.811 real 0m9.076s 00:11:11.811 user 0m0.029s 00:11:11.811 sys 0m0.074s 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:11.811 ************************************ 00:11:11.811 END TEST filesystem_in_capsule_ext4 00:11:11.811 ************************************ 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.811 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.812 ************************************ 00:11:11.812 START 
TEST filesystem_in_capsule_btrfs 00:11:11.812 ************************************ 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.812 15:44:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:12.069 btrfs-progs v6.8.1 00:11:12.069 See https://btrfs.readthedocs.io for more information. 00:11:12.069 00:11:12.069 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:12.069 NOTE: several default settings have changed in version 5.15, please make sure 00:11:12.069 this does not affect your deployments: 00:11:12.069 - DUP for metadata (-m dup) 00:11:12.069 - enabled no-holes (-O no-holes) 00:11:12.069 - enabled free-space-tree (-R free-space-tree) 00:11:12.069 00:11:12.069 Label: (null) 00:11:12.069 UUID: 745276ed-34af-4a53-90de-ef733ff83a0a 00:11:12.069 Node size: 16384 00:11:12.069 Sector size: 4096 (CPU page size: 4096) 00:11:12.069 Filesystem size: 510.00MiB 00:11:12.069 Block group profiles: 00:11:12.069 Data: single 8.00MiB 00:11:12.069 Metadata: DUP 32.00MiB 00:11:12.069 System: DUP 8.00MiB 00:11:12.069 SSD detected: yes 00:11:12.069 Zoned device: no 00:11:12.069 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:12.069 Checksum: crc32c 00:11:12.069 Number of devices: 1 00:11:12.069 Devices: 00:11:12.069 ID SIZE PATH 00:11:12.069 1 510.00MiB /dev/nvme0n1p1 00:11:12.069 00:11:12.069 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:12.069 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1906448 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.002 00:11:13.002 real 0m1.033s 00:11:13.002 user 0m0.025s 00:11:13.002 sys 0m0.113s 00:11:13.002 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.003 15:44:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.003 ************************************ 00:11:13.003 END TEST filesystem_in_capsule_btrfs 00:11:13.003 ************************************ 00:11:13.003 15:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.003 ************************************ 00:11:13.003 START TEST filesystem_in_capsule_xfs 00:11:13.003 ************************************ 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:13.003 
15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:13.003 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:13.003 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:13.003 = sectsz=512 attr=2, projid32bit=1 00:11:13.003 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:13.003 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:13.003 data = bsize=4096 blocks=130560, imaxpct=25 00:11:13.003 = sunit=0 swidth=0 blks 00:11:13.003 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:13.003 log =internal log bsize=4096 blocks=16384, version=2 00:11:13.003 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:13.003 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.937 Discarding blocks...Done. 
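Two of the checks earlier in this run are pure text processing and arithmetic, and can be replayed outside the harness: the device-name extraction at filesystem.sh@63 and the size computation behind sec_size_to_bytes at setup/common.sh@80. A minimal standalone sketch (the sample `lsblk` line is inlined here rather than read from a live NVMe-oF connection; helper names are only referenced, not reimplemented):

```shell
# filesystem.sh@63: pick the block device whose SERIAL column matches the
# subsystem serial, via a PCRE lookahead on `lsblk -l -o NAME,SERIAL` output.
lsblk_output='nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(printf '%s\n' "$lsblk_output" \
  | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"        # nvme0n1

# sec_size_to_bytes: /sys/block/<dev>/size counts 512-byte sectors, so the
# 1048576-block, 512-byte Malloc1 bdev reports 536870912 bytes, matching
# malloc_size and the `(( nvme_size == malloc_size ))` check at
# filesystem.sh@67.
sectors=1048576
echo $((sectors * 512))  # 536870912
```

This reproduces the values the log records (nvme_name=nvme0n1, nvme_size=536870912) without needing the nvmf target running.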
00:11:13.937 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.937 15:44:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1906448 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.835 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.836 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:15.836 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.836 00:11:15.836 real 0m2.953s 00:11:15.836 user 0m0.022s 00:11:15.836 sys 0m0.074s 00:11:15.836 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.836 15:44:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.836 ************************************ 00:11:15.836 END TEST filesystem_in_capsule_xfs 00:11:15.836 ************************************ 00:11:15.836 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.093 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.093 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.351 15:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1906448 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1906448 ']' 00:11:16.351 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1906448 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.352 15:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906448 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906448' 00:11:16.352 killing process with pid 1906448 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1906448 00:11:16.352 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1906448 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:16.920 00:11:16.920 real 0m19.318s 00:11:16.920 user 1m16.111s 00:11:16.920 sys 0m1.420s 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.920 ************************************ 00:11:16.920 END TEST nvmf_filesystem_in_capsule 00:11:16.920 ************************************ 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.920 rmmod nvme_tcp 00:11:16.920 rmmod nvme_fabrics 00:11:16.920 rmmod nvme_keyring 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.920 15:44:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.825 00:11:18.825 real 0m46.752s 00:11:18.825 user 2m31.872s 00:11:18.825 sys 0m7.451s 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.825 ************************************ 00:11:18.825 END TEST nvmf_filesystem 00:11:18.825 ************************************ 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.825 15:44:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.085 ************************************ 00:11:19.085 START TEST nvmf_target_discovery 00:11:19.085 ************************************ 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.085 * Looking for test storage... 
00:11:19.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:19.085 
15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:19.085 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.086 --rc genhtml_branch_coverage=1 00:11:19.086 --rc genhtml_function_coverage=1 00:11:19.086 --rc genhtml_legend=1 00:11:19.086 --rc geninfo_all_blocks=1 00:11:19.086 --rc geninfo_unexecuted_blocks=1 00:11:19.086 00:11:19.086 ' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.086 --rc genhtml_branch_coverage=1 00:11:19.086 --rc genhtml_function_coverage=1 00:11:19.086 --rc genhtml_legend=1 00:11:19.086 --rc geninfo_all_blocks=1 00:11:19.086 --rc geninfo_unexecuted_blocks=1 00:11:19.086 00:11:19.086 ' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.086 --rc genhtml_branch_coverage=1 00:11:19.086 --rc genhtml_function_coverage=1 00:11:19.086 --rc genhtml_legend=1 00:11:19.086 --rc geninfo_all_blocks=1 00:11:19.086 --rc geninfo_unexecuted_blocks=1 00:11:19.086 00:11:19.086 ' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.086 --rc genhtml_branch_coverage=1 00:11:19.086 --rc genhtml_function_coverage=1 00:11:19.086 --rc genhtml_legend=1 00:11:19.086 --rc geninfo_all_blocks=1 00:11:19.086 --rc geninfo_unexecuted_blocks=1 00:11:19.086 00:11:19.086 ' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.086 15:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.086 15:44:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.660 15:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.660 15:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.660 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:25.661 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:25.661 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.661 15:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:25.661 Found net devices under 0000:af:00.0: cvl_0_0 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.661 15:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:25.661 Found net devices under 0000:af:00.1: cvl_0_1 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.661 15:44:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:11:25.661 00:11:25.661 --- 10.0.0.2 ping statistics --- 00:11:25.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.661 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:25.661 00:11:25.661 --- 10.0.0.1 ping statistics --- 00:11:25.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.661 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1913340 00:11:25.661 15:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1913340 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1913340 ']' 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.661 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.662 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.662 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.662 15:44:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:25.662 [2024-12-09 15:44:20.335303] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:11:25.662 [2024-12-09 15:44:20.335347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.662 [2024-12-09 15:44:20.414485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.662 [2024-12-09 15:44:20.454132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
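The trace above (nvmf/common.sh lines @250-@291) moves one interface into a fresh network namespace for the target, keeps the other in the root namespace for the initiator, and opens TCP port 4420. A minimal side-effect-free sketch of that sequence follows; the interface names (cvl_0_0, cvl_0_1), IPs, and port mirror this log, and RUN=echo makes it a dry run (set RUN to empty and run as root to execute for real):

```shell
#!/bin/sh
# Dry-run sketch of the netns setup performed by nvmf_tcp_init above.
# RUN=echo prints each command instead of executing it.
RUN="${RUN:-echo}"
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace, gets the target IP 10.0.0.2
INI_IF=cvl_0_1   # stays in the root namespace, gets the initiator IP 10.0.0.1

$RUN ip -4 addr flush "$TGT_IF"
$RUN ip -4 addr flush "$INI_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic on the initiator-side interface (port 4420)
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then verify connectivity before nvmf_tgt is launched under `ip netns exec`.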
00:11:25.662 [2024-12-09 15:44:20.454169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.662 [2024-12-09 15:44:20.454176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.662 [2024-12-09 15:44:20.454182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.662 [2024-12-09 15:44:20.454187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.662 [2024-12-09 15:44:20.455619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.662 [2024-12-09 15:44:20.455727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.662 [2024-12-09 15:44:20.455811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.662 [2024-12-09 15:44:20.455812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 [2024-12-09 15:44:21.223282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 Null1 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 
15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 [2024-12-09 15:44:21.282377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 Null2 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 
15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 Null3 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 Null4 00:11:26.229 
15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.229 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.230 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:26.488 00:11:26.488 Discovery Log Number of Records 6, Generation counter 6 00:11:26.488 =====Discovery Log Entry 0====== 00:11:26.488 trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: current discovery subsystem 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4420 00:11:26.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: explicit discovery connections, duplicate discovery information 00:11:26.488 sectype: none 00:11:26.488 =====Discovery Log Entry 1====== 00:11:26.488 trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: nvme subsystem 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4420 00:11:26.488 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: none 00:11:26.488 sectype: none 00:11:26.488 =====Discovery Log Entry 2====== 00:11:26.488 
trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: nvme subsystem 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4420 00:11:26.488 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: none 00:11:26.488 sectype: none 00:11:26.488 =====Discovery Log Entry 3====== 00:11:26.488 trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: nvme subsystem 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4420 00:11:26.488 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: none 00:11:26.488 sectype: none 00:11:26.488 =====Discovery Log Entry 4====== 00:11:26.488 trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: nvme subsystem 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4420 00:11:26.488 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: none 00:11:26.488 sectype: none 00:11:26.488 =====Discovery Log Entry 5====== 00:11:26.488 trtype: tcp 00:11:26.488 adrfam: ipv4 00:11:26.488 subtype: discovery subsystem referral 00:11:26.488 treq: not required 00:11:26.488 portid: 0 00:11:26.488 trsvcid: 4430 00:11:26.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.488 traddr: 10.0.0.2 00:11:26.488 eflags: none 00:11:26.488 sectype: none 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:26.489 Perform nvmf subsystem discovery via RPC 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 [ 00:11:26.489 { 00:11:26.489 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:26.489 "subtype": "Discovery", 00:11:26.489 "listen_addresses": [ 00:11:26.489 { 00:11:26.489 "trtype": "TCP", 00:11:26.489 "adrfam": "IPv4", 00:11:26.489 "traddr": "10.0.0.2", 00:11:26.489 "trsvcid": "4420" 00:11:26.489 } 00:11:26.489 ], 00:11:26.489 "allow_any_host": true, 00:11:26.489 "hosts": [] 00:11:26.489 }, 00:11:26.489 { 00:11:26.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.489 "subtype": "NVMe", 00:11:26.489 "listen_addresses": [ 00:11:26.489 { 00:11:26.489 "trtype": "TCP", 00:11:26.489 "adrfam": "IPv4", 00:11:26.489 "traddr": "10.0.0.2", 00:11:26.489 "trsvcid": "4420" 00:11:26.489 } 00:11:26.489 ], 00:11:26.489 "allow_any_host": true, 00:11:26.489 "hosts": [], 00:11:26.489 "serial_number": "SPDK00000000000001", 00:11:26.489 "model_number": "SPDK bdev Controller", 00:11:26.489 "max_namespaces": 32, 00:11:26.489 "min_cntlid": 1, 00:11:26.489 "max_cntlid": 65519, 00:11:26.489 "namespaces": [ 00:11:26.489 { 00:11:26.489 "nsid": 1, 00:11:26.489 "bdev_name": "Null1", 00:11:26.489 "name": "Null1", 00:11:26.489 "nguid": "3FE640714F4F47A9B691C3839F113EA8", 00:11:26.489 "uuid": "3fe64071-4f4f-47a9-b691-c3839f113ea8" 00:11:26.489 } 00:11:26.489 ] 00:11:26.489 }, 00:11:26.489 { 00:11:26.489 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:26.489 "subtype": "NVMe", 00:11:26.489 "listen_addresses": [ 00:11:26.489 { 00:11:26.489 "trtype": "TCP", 00:11:26.489 "adrfam": "IPv4", 00:11:26.489 "traddr": "10.0.0.2", 00:11:26.489 "trsvcid": "4420" 00:11:26.489 } 00:11:26.489 ], 00:11:26.489 "allow_any_host": true, 00:11:26.489 "hosts": [], 00:11:26.489 "serial_number": "SPDK00000000000002", 00:11:26.489 "model_number": "SPDK bdev Controller", 00:11:26.489 "max_namespaces": 32, 00:11:26.489 "min_cntlid": 1, 00:11:26.489 "max_cntlid": 65519, 00:11:26.489 "namespaces": [ 00:11:26.489 { 00:11:26.489 "nsid": 1, 00:11:26.489 "bdev_name": "Null2", 00:11:26.489 "name": "Null2", 00:11:26.489 "nguid": "8E351400777346B19678F9739A80A10C", 
00:11:26.489 "uuid": "8e351400-7773-46b1-9678-f9739a80a10c" 00:11:26.489 } 00:11:26.489 ] 00:11:26.489 }, 00:11:26.489 { 00:11:26.489 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:26.489 "subtype": "NVMe", 00:11:26.489 "listen_addresses": [ 00:11:26.489 { 00:11:26.489 "trtype": "TCP", 00:11:26.489 "adrfam": "IPv4", 00:11:26.489 "traddr": "10.0.0.2", 00:11:26.489 "trsvcid": "4420" 00:11:26.489 } 00:11:26.489 ], 00:11:26.489 "allow_any_host": true, 00:11:26.489 "hosts": [], 00:11:26.489 "serial_number": "SPDK00000000000003", 00:11:26.489 "model_number": "SPDK bdev Controller", 00:11:26.489 "max_namespaces": 32, 00:11:26.489 "min_cntlid": 1, 00:11:26.489 "max_cntlid": 65519, 00:11:26.489 "namespaces": [ 00:11:26.489 { 00:11:26.489 "nsid": 1, 00:11:26.489 "bdev_name": "Null3", 00:11:26.489 "name": "Null3", 00:11:26.489 "nguid": "793037E2FD90428EBB1671A787D3B54B", 00:11:26.489 "uuid": "793037e2-fd90-428e-bb16-71a787d3b54b" 00:11:26.489 } 00:11:26.489 ] 00:11:26.489 }, 00:11:26.489 { 00:11:26.489 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:26.489 "subtype": "NVMe", 00:11:26.489 "listen_addresses": [ 00:11:26.489 { 00:11:26.489 "trtype": "TCP", 00:11:26.489 "adrfam": "IPv4", 00:11:26.489 "traddr": "10.0.0.2", 00:11:26.489 "trsvcid": "4420" 00:11:26.489 } 00:11:26.489 ], 00:11:26.489 "allow_any_host": true, 00:11:26.489 "hosts": [], 00:11:26.489 "serial_number": "SPDK00000000000004", 00:11:26.489 "model_number": "SPDK bdev Controller", 00:11:26.489 "max_namespaces": 32, 00:11:26.489 "min_cntlid": 1, 00:11:26.489 "max_cntlid": 65519, 00:11:26.489 "namespaces": [ 00:11:26.489 { 00:11:26.489 "nsid": 1, 00:11:26.489 "bdev_name": "Null4", 00:11:26.489 "name": "Null4", 00:11:26.489 "nguid": "2EB9F887FF904EADB14074D60AD393A5", 00:11:26.489 "uuid": "2eb9f887-ff90-4ead-b140-74d60ad393a5" 00:11:26.489 } 00:11:26.489 ] 00:11:26.489 } 00:11:26.489 ] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 
15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.489 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.748 rmmod nvme_tcp 00:11:26.748 rmmod nvme_fabrics 00:11:26.748 rmmod nvme_keyring 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1913340 ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1913340 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1913340 ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1913340 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913340 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913340' 00:11:26.748 killing process with pid 1913340 00:11:26.748 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1913340 00:11:26.749 15:44:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1913340 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.008 15:44:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.915 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.915 00:11:28.915 real 0m10.035s 00:11:28.915 user 0m8.370s 00:11:28.915 sys 0m4.812s 00:11:28.915 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.915 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.915 ************************************ 00:11:28.915 END TEST nvmf_target_discovery 00:11:28.915 ************************************ 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.175 ************************************ 00:11:29.175 START TEST nvmf_referrals 00:11:29.175 ************************************ 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:29.175 * Looking for test storage... 
00:11:29.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:29.175 15:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.175 
--rc genhtml_branch_coverage=1 00:11:29.175 --rc genhtml_function_coverage=1 00:11:29.175 --rc genhtml_legend=1 00:11:29.175 --rc geninfo_all_blocks=1 00:11:29.175 --rc geninfo_unexecuted_blocks=1 00:11:29.175 00:11:29.175 ' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.175 --rc genhtml_branch_coverage=1 00:11:29.175 --rc genhtml_function_coverage=1 00:11:29.175 --rc genhtml_legend=1 00:11:29.175 --rc geninfo_all_blocks=1 00:11:29.175 --rc geninfo_unexecuted_blocks=1 00:11:29.175 00:11:29.175 ' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.175 --rc genhtml_branch_coverage=1 00:11:29.175 --rc genhtml_function_coverage=1 00:11:29.175 --rc genhtml_legend=1 00:11:29.175 --rc geninfo_all_blocks=1 00:11:29.175 --rc geninfo_unexecuted_blocks=1 00:11:29.175 00:11:29.175 ' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:29.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.175 --rc genhtml_branch_coverage=1 00:11:29.175 --rc genhtml_function_coverage=1 00:11:29.175 --rc genhtml_legend=1 00:11:29.175 --rc geninfo_all_blocks=1 00:11:29.175 --rc geninfo_unexecuted_blocks=1 00:11:29.175 00:11:29.175 ' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.175 
15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.175 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.176 15:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.176 15:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.176 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.435 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.435 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.435 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.436 15:44:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:36.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:36.007 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:36.007 Found net devices under 0000:af:00.0: cvl_0_0 00:11:36.007 15:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:36.007 Found net devices under 0000:af:00.1: cvl_0_1 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.007 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:11:36.008 00:11:36.008 --- 10.0.0.2 ping statistics --- 00:11:36.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.008 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:11:36.008 00:11:36.008 --- 10.0.0.1 ping statistics --- 00:11:36.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.008 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1917089 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1917089 00:11:36.008 
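[editor's sketch] The netns plumbing traced above (nvmf/common.sh@265-291) follows a standard pattern: move the target-side interface into a private namespace, address both ends, bring links up, then ping across. A minimal dry-run sketch using the interface and namespace names from this log — it prints the commands instead of executing them, since the real ones need root and the cvl_0_* devices:

```shell
#!/bin/sh
# Dry-run of the namespace setup seen in the trace; names/addresses are the
# log's own. `plan` echoes each command rather than running it.
NS=cvl_0_0_ns_spdk
plan() { echo "+ $*"; }
plan ip netns add "$NS"
plan ip link set cvl_0_0 netns "$NS"              # target side goes into the ns
plan ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in root ns
plan ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
plan ip link set cvl_0_1 up
plan ip netns exec "$NS" ip link set cvl_0_0 up
plan ping -c 1 10.0.0.2                           # cross-namespace reachability check
```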
15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1917089 ']' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 [2024-12-09 15:44:30.408297] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:11:36.008 [2024-12-09 15:44:30.408339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.008 [2024-12-09 15:44:30.486866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.008 [2024-12-09 15:44:30.527643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.008 [2024-12-09 15:44:30.527679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:36.008 [2024-12-09 15:44:30.527686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.008 [2024-12-09 15:44:30.527693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.008 [2024-12-09 15:44:30.527697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.008 [2024-12-09 15:44:30.529242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.008 [2024-12-09 15:44:30.529308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.008 [2024-12-09 15:44:30.529393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.008 [2024-12-09 15:44:30.529394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 [2024-12-09 15:44:30.666941] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 [2024-12-09 15:44:30.696369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:36.008 15:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.008 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.009 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.009 15:44:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 15:44:31 
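[editor's sketch] The get_referral_ips comparison traced above extracts traddr values from `rpc_cmd nvmf_discovery_get_referrals` JSON via `jq -r '.[].address.traddr' | sort` and matches them against the sorted `nvme discover -o json` output. A self-contained imitation using a canned JSON sample (hypothetical, standing in for live RPC output) and sed in place of jq, so it runs without a target:

```shell
#!/bin/sh
# Canned stand-in for `rpc.py nvmf_discovery_get_referrals` output.
sample='[{"address":{"traddr":"127.0.0.3"}},{"address":{"traddr":"127.0.0.2"}},{"address":{"traddr":"127.0.0.4"}}]'
# Pull out each traddr, sort, and flatten to one space-separated line,
# mirroring the script's `jq -r '.[].address.traddr' | sort`.
ips=$(printf '%s\n' "$sample" | tr ',' '\n' \
      | sed -n 's/.*"traddr":"\([0-9.]*\)".*/\1/p' | sort | tr '\n' ' ')
ips=${ips% }
echo "$ips"
# The test script then string-compares this against the nvme-discover view.
[ "$ips" = "127.0.0.2 127.0.0.3 127.0.0.4" ] && echo referrals-match
```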
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.009 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:36.267 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:36.525 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:36.784 15:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:36.784 15:44:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:37.042 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.300 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.301 15:44:32 
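[editor's sketch] The get_discovery_entries checks traced above filter the `nvme discover -o json` page by subtype — `jq '.records[] | select(.subtype == "nvme subsystem")'` — and then read `.subnqn`. The same selection on a canned discovery page (hypothetical data, sed in place of jq, so it runs offline):

```shell
#!/bin/sh
# Canned stand-in for one `nvme discover ... -o json` page with two records.
page='{"records":[{"subtype":"nvme subsystem","subnqn":"nqn.2016-06.io.spdk:cnode1"},{"subtype":"current discovery subsystem","subnqn":"nqn.2014-08.org.nvmexpress.discovery"}]}'
# Keep only the record whose subtype is "nvme subsystem", then extract subnqn,
# mirroring the select()/.subnqn pipeline in referrals.sh.
subnqn=$(printf '%s\n' "$page" | tr '{' '\n' \
         | grep '"nvme subsystem"' \
         | sed -n 's/.*"subnqn":"\([^"]*\)".*/\1/p')
echo "$subnqn"
```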
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.301 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.559 15:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.559 rmmod nvme_tcp 00:11:37.559 rmmod nvme_fabrics 00:11:37.559 rmmod nvme_keyring 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1917089 ']' 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1917089 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1917089 ']' 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1917089 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917089 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917089' 00:11:37.559 killing process with pid 1917089 00:11:37.559 15:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1917089 00:11:37.559 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1917089 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.819 15:44:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.358 15:44:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.358 00:11:40.358 real 0m10.814s 00:11:40.358 user 0m12.182s 00:11:40.358 sys 0m5.166s 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
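[editor's sketch] The `iptr` cleanup traced above works because every rule the harness adds carries an `-m comment --comment 'SPDK_NVMF:...'` tag (see the `ipts` call earlier in this log); teardown round-trips the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`, dropping only tagged rules. The filtering step, simulated on a canned ruleset (hypothetical rules, no root needed):

```shell
#!/bin/sh
# Canned stand-in for `iptables-save` output: one harness-tagged rule among two
# unrelated ones.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'
# Drop every SPDK_NVMF-tagged line; the real flow pipes this into iptables-restore.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```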
common/autotest_common.sh@10 -- # set +x 00:11:40.358 ************************************ 00:11:40.358 END TEST nvmf_referrals 00:11:40.358 ************************************ 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.358 ************************************ 00:11:40.358 START TEST nvmf_connect_disconnect 00:11:40.358 ************************************ 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:40.358 * Looking for test storage... 
00:11:40.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.358 --rc genhtml_branch_coverage=1 00:11:40.358 --rc genhtml_function_coverage=1 00:11:40.358 --rc genhtml_legend=1 00:11:40.358 --rc geninfo_all_blocks=1 00:11:40.358 --rc geninfo_unexecuted_blocks=1 00:11:40.358 00:11:40.358 ' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.358 --rc genhtml_branch_coverage=1 00:11:40.358 --rc genhtml_function_coverage=1 00:11:40.358 --rc genhtml_legend=1 00:11:40.358 --rc geninfo_all_blocks=1 00:11:40.358 --rc geninfo_unexecuted_blocks=1 00:11:40.358 00:11:40.358 ' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.358 --rc genhtml_branch_coverage=1 00:11:40.358 --rc genhtml_function_coverage=1 00:11:40.358 --rc genhtml_legend=1 00:11:40.358 --rc geninfo_all_blocks=1 00:11:40.358 --rc geninfo_unexecuted_blocks=1 00:11:40.358 00:11:40.358 ' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:40.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.358 --rc genhtml_branch_coverage=1 00:11:40.358 --rc genhtml_function_coverage=1 00:11:40.358 --rc genhtml_legend=1 00:11:40.358 --rc geninfo_all_blocks=1 00:11:40.358 --rc geninfo_unexecuted_blocks=1 00:11:40.358 00:11:40.358 ' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.358 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.359 15:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.938 15:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.938 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.939 15:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:46.939 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:46.939 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.939 15:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:46.939 Found net devices under 0000:af:00.0: cvl_0_0 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.939 15:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:46.939 Found net devices under 0000:af:00.1: cvl_0_1 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.939 15:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.939 15:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:11:46.939 00:11:46.939 --- 10.0.0.2 ping statistics --- 00:11:46.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.939 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:46.939 00:11:46.939 --- 10.0.0.1 ping statistics --- 00:11:46.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.939 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1921128 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1921128 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1921128 ']' 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.939 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.939 [2024-12-09 15:44:41.354820] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:11:46.939 [2024-12-09 15:44:41.354870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.939 [2024-12-09 15:44:41.435289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.940 [2024-12-09 15:44:41.476642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:46.940 [2024-12-09 15:44:41.476677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.940 [2024-12-09 15:44:41.476684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.940 [2024-12-09 15:44:41.476689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.940 [2024-12-09 15:44:41.476694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.940 [2024-12-09 15:44:41.478152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.940 [2024-12-09 15:44:41.478262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.940 [2024-12-09 15:44:41.478307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.940 [2024-12-09 15:44:41.478307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:46.940 15:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 [2024-12-09 15:44:41.615996] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.940 15:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 [2024-12-09 15:44:41.683162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:46.940 15:44:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:50.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:03.362 15:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.362 15:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.362 rmmod nvme_tcp 00:12:03.362 rmmod nvme_fabrics 00:12:03.362 rmmod nvme_keyring 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1921128 ']' 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1921128 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1921128 ']' 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1921128 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921128 
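[Editor's note] The `killprocess 1921128` sequence traced here first checks the PID is still valid, reads its command name with `ps --no-headers -o comm=`, and refuses to kill a `sudo` wrapper directly. A hedged sketch of that logic (simplified; the real helper handles non-Linux `ps` variants too):

```shell
# Kill a managed test process by PID, mirroring the checks seen in the
# trace above (illustrative reconstruction, not SPDK's exact code).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # bail out if the process is already gone
    kill -0 "$pid" 2>/dev/null || return 1
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # never signal a sudo wrapper directly; its child must be targeted instead
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait only reaps children of this shell, hence the tolerant redirect
    wait "$pid" 2>/dev/null || true
}
```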
00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1921128' 00:12:03.362 killing process with pid 1921128 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1921128 00:12:03.362 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1921128 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.363 15:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.363 15:44:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.270 00:12:05.270 real 0m25.258s 00:12:05.270 user 1m8.441s 00:12:05.270 sys 0m5.783s 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.270 ************************************ 00:12:05.270 END TEST nvmf_connect_disconnect 00:12:05.270 ************************************ 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.270 ************************************ 00:12:05.270 START TEST nvmf_multitarget 00:12:05.270 ************************************ 00:12:05.270 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.530 * Looking for test storage... 
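[Editor's note] The trace that follows runs `lt 1.15 2` from `scripts/common.sh` to gate lcov options on the installed lcov version: both dotted versions are split into fields and compared numerically, field by field. A simplified sketch handling only the `<` operator (the real helper splits on `.-:` and supports more operators):

```shell
# Compare two dotted version strings field by field (hedged sketch of the
# cmp_versions/lt helpers traced below; only '<' is implemented here).
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.
    local -a ver1=($1) ver2=($3)
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # missing fields default to 0, so 1.15 compares like 1.15.0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not strictly '<'
}
```

This is why `lt 1.15 2` in the trace succeeds and the modern `--rc lcov_branch_coverage=1` option spelling is selected.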
00:12:05.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.530 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.530 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.530 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.530 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.530 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.531 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.531 --rc genhtml_branch_coverage=1 00:12:05.531 --rc genhtml_function_coverage=1 00:12:05.531 --rc genhtml_legend=1 00:12:05.531 --rc geninfo_all_blocks=1 00:12:05.531 --rc geninfo_unexecuted_blocks=1 00:12:05.531 00:12:05.531 ' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.531 --rc genhtml_branch_coverage=1 00:12:05.531 --rc genhtml_function_coverage=1 00:12:05.531 --rc genhtml_legend=1 00:12:05.531 --rc geninfo_all_blocks=1 00:12:05.531 --rc geninfo_unexecuted_blocks=1 00:12:05.531 00:12:05.531 ' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.531 --rc genhtml_branch_coverage=1 00:12:05.531 --rc genhtml_function_coverage=1 00:12:05.531 --rc genhtml_legend=1 00:12:05.531 --rc geninfo_all_blocks=1 00:12:05.531 --rc geninfo_unexecuted_blocks=1 00:12:05.531 00:12:05.531 ' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.531 --rc genhtml_branch_coverage=1 00:12:05.531 --rc genhtml_function_coverage=1 00:12:05.531 --rc genhtml_legend=1 00:12:05.531 --rc geninfo_all_blocks=1 00:12:05.531 --rc geninfo_unexecuted_blocks=1 00:12:05.531 00:12:05.531 ' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.531 15:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
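[Editor's note] The `paths/export.sh` lines above prepend the same Go/protoc/golangci directories on every source, so PATH accumulates many duplicate entries by the time it is exported. An order-preserving dedup helper would shrink it; this is an illustrative utility, not part of SPDK:

```shell
# Remove duplicate entries from a colon-separated path list while keeping
# the first occurrence of each entry (hedged sketch, not SPDK code).
dedup_path() {
    local entry out='' seen=':'
    local IFS=:
    for entry in $1; do
        case "$seen" in
            *":$entry:"*) continue ;;  # already emitted earlier in the list
        esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}
```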
00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.531 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.532 15:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.532 15:45:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:12.321 15:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.321 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.322 15:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:12.322 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:12.322 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.322 15:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:12.322 Found net devices under 0000:af:00.0: cvl_0_0 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.322 
15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:12.322 Found net devices under 0000:af:00.1: cvl_0_1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.322 15:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:12:12.322 00:12:12.322 --- 10.0.0.2 ping statistics --- 00:12:12.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.322 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:12.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:12:12.322 00:12:12.322 --- 10.0.0.1 ping statistics --- 00:12:12.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.322 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1927821 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1927821 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1927821 ']' 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.322 15:45:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 [2024-12-09 15:45:06.616601] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:12:12.322 [2024-12-09 15:45:06.616644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.322 [2024-12-09 15:45:06.698767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.322 [2024-12-09 15:45:06.739018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.322 [2024-12-09 15:45:06.739054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:12.322 [2024-12-09 15:45:06.739061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.322 [2024-12-09 15:45:06.739067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.322 [2024-12-09 15:45:06.739072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.323 [2024-12-09 15:45:06.740603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.323 [2024-12-09 15:45:06.740708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.323 [2024-12-09 15:45:06.740825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.323 [2024-12-09 15:45:06.740825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:12.323 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.323 15:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:12.581 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:12.581 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:12.581 "nvmf_tgt_1" 00:12:12.581 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:12.839 "nvmf_tgt_2" 00:12:12.840 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:12.840 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:12.840 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:12.840 15:45:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:12.840 true 00:12:12.840 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:13.098 true 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.098 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.098 rmmod nvme_tcp 00:12:13.098 rmmod nvme_fabrics 00:12:13.098 rmmod nvme_keyring 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1927821 ']' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1927821 ']' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1927821' 00:12:13.357 killing process with pid 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1927821 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.357 15:45:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.893 00:12:15.893 real 0m10.216s 00:12:15.893 user 0m9.819s 00:12:15.893 sys 0m4.988s 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:15.893 ************************************ 00:12:15.893 END TEST nvmf_multitarget 00:12:15.893 ************************************ 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.893 ************************************ 00:12:15.893 START TEST nvmf_rpc 00:12:15.893 ************************************ 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:15.893 * Looking for test storage... 
00:12:15.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.893 15:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.893 --rc genhtml_branch_coverage=1 00:12:15.893 --rc genhtml_function_coverage=1 00:12:15.893 --rc genhtml_legend=1 00:12:15.893 --rc geninfo_all_blocks=1 00:12:15.893 --rc geninfo_unexecuted_blocks=1 
00:12:15.893 00:12:15.893 ' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.893 --rc genhtml_branch_coverage=1 00:12:15.893 --rc genhtml_function_coverage=1 00:12:15.893 --rc genhtml_legend=1 00:12:15.893 --rc geninfo_all_blocks=1 00:12:15.893 --rc geninfo_unexecuted_blocks=1 00:12:15.893 00:12:15.893 ' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.893 --rc genhtml_branch_coverage=1 00:12:15.893 --rc genhtml_function_coverage=1 00:12:15.893 --rc genhtml_legend=1 00:12:15.893 --rc geninfo_all_blocks=1 00:12:15.893 --rc geninfo_unexecuted_blocks=1 00:12:15.893 00:12:15.893 ' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.893 --rc genhtml_branch_coverage=1 00:12:15.893 --rc genhtml_function_coverage=1 00:12:15.893 --rc genhtml_legend=1 00:12:15.893 --rc geninfo_all_blocks=1 00:12:15.893 --rc geninfo_unexecuted_blocks=1 00:12:15.893 00:12:15.893 ' 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.893 15:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.893 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.894 15:45:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.894 15:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.464 
15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:12:22.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.464 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:22.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:22.465 Found net devices under 0000:af:00.0: cvl_0_0 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:22.465 Found net devices under 0000:af:00.1: cvl_0_1 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.465 15:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.465 
15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:12:22.465 00:12:22.465 --- 10.0.0.2 ping statistics --- 00:12:22.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.465 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:12:22.465 00:12:22.465 --- 10.0.0.1 ping statistics --- 00:12:22.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.465 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1931727 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.465 
15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1931727 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1931727 ']' 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.465 15:45:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.465 [2024-12-09 15:45:16.881123] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:12:22.465 [2024-12-09 15:45:16.881175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.465 [2024-12-09 15:45:16.958634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.465 [2024-12-09 15:45:16.997966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.465 [2024-12-09 15:45:16.998005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.465 [2024-12-09 15:45:16.998013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.465 [2024-12-09 15:45:16.998019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:22.465 [2024-12-09 15:45:16.998023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.465 [2024-12-09 15:45:16.999401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.465 [2024-12-09 15:45:16.999512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.465 [2024-12-09 15:45:16.999621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.465 [2024-12-09 15:45:16.999622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.465 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:22.465 "tick_rate": 2100000000, 00:12:22.465 "poll_groups": [ 00:12:22.465 { 00:12:22.465 "name": "nvmf_tgt_poll_group_000", 00:12:22.465 "admin_qpairs": 0, 00:12:22.465 "io_qpairs": 0, 00:12:22.465 
"current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_001", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_002", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_003", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [] 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 [2024-12-09 15:45:17.253436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:22.466 "tick_rate": 2100000000, 00:12:22.466 "poll_groups": [ 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_000", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [ 00:12:22.466 { 00:12:22.466 "trtype": "TCP" 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_001", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [ 00:12:22.466 { 00:12:22.466 "trtype": "TCP" 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_002", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 
"current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [ 00:12:22.466 { 00:12:22.466 "trtype": "TCP" 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "nvmf_tgt_poll_group_003", 00:12:22.466 "admin_qpairs": 0, 00:12:22.466 "io_qpairs": 0, 00:12:22.466 "current_admin_qpairs": 0, 00:12:22.466 "current_io_qpairs": 0, 00:12:22.466 "pending_bdev_io": 0, 00:12:22.466 "completed_nvme_io": 0, 00:12:22.466 "transports": [ 00:12:22.466 { 00:12:22.466 "trtype": "TCP" 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 Malloc1 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.466 [2024-12-09 15:45:17.439166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.466 
15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:22.466 [2024-12-09 15:45:17.473909] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:22.466 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.466 could not add new controller: failed to write to nvme-fabrics device 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:22.466 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.466 15:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:22.467 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.467 15:45:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:23.843 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:23.843 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:23.843 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:23.843 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:23.843 15:45:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:25.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:25.744 [2024-12-09 15:45:20.850412] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562'
00:12:25.744 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:25.744 could not add new controller: failed to write to nvme-fabrics device
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.744 15:45:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:27.114 15:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:27.115 15:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:27.115 15:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:27.115 15:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:27.115 15:45:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:29.011 15:45:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:29.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.011 [2024-12-09 15:45:24.145820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.011 15:45:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:30.382 15:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:30.382 15:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:30.382 15:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:30.382 15:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:30.382 15:45:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:32.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.277 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.277 [2024-12-09 15:45:27.504978] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:32.534 15:45:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:33.465 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:33.465 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:33.465 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:33.465 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:33.465 15:45:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:35.986 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:35.986 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:35.986 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:35.986 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:35.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 [2024-12-09 15:45:30.815022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.987 15:45:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:36.919 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:36.919 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:36.919 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:36.919 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:36.919 15:45:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:38.815 15:45:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:38.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:38.815 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 [2024-12-09 15:45:34.085750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.073 15:45:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.453 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:40.453 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:40.453 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:40.453 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:40.453 15:45:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:42.349 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:42.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 [2024-12-09 15:45:37.443746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.350 15:45:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:43.722 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:43.722 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:43.722 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:43.722 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:43.722 15:45:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:45.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 [2024-12-09 15:45:40.781639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.620 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.621 [2024-12-09 15:45:40.829743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.621 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.879 
15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 [2024-12-09 15:45:40.877894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.879 
15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.879 [2024-12-09 15:45:40.926059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.879 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 [2024-12-09 
15:45:40.974214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 
15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:45.880 "tick_rate": 2100000000, 00:12:45.880 "poll_groups": [ 00:12:45.880 { 00:12:45.880 "name": "nvmf_tgt_poll_group_000", 00:12:45.880 "admin_qpairs": 2, 00:12:45.880 "io_qpairs": 168, 00:12:45.880 "current_admin_qpairs": 0, 00:12:45.880 "current_io_qpairs": 0, 00:12:45.880 "pending_bdev_io": 0, 00:12:45.880 "completed_nvme_io": 240, 00:12:45.880 "transports": [ 00:12:45.880 { 00:12:45.880 "trtype": "TCP" 00:12:45.880 } 00:12:45.880 ] 00:12:45.880 }, 00:12:45.880 { 00:12:45.880 "name": "nvmf_tgt_poll_group_001", 00:12:45.880 "admin_qpairs": 2, 00:12:45.880 "io_qpairs": 168, 00:12:45.880 "current_admin_qpairs": 0, 00:12:45.880 "current_io_qpairs": 0, 00:12:45.880 "pending_bdev_io": 0, 00:12:45.880 "completed_nvme_io": 273, 00:12:45.880 "transports": [ 00:12:45.880 { 00:12:45.880 "trtype": "TCP" 00:12:45.880 } 00:12:45.880 ] 00:12:45.880 }, 00:12:45.880 { 00:12:45.880 "name": "nvmf_tgt_poll_group_002", 00:12:45.880 "admin_qpairs": 1, 00:12:45.880 "io_qpairs": 168, 00:12:45.880 "current_admin_qpairs": 0, 00:12:45.880 "current_io_qpairs": 0, 00:12:45.880 "pending_bdev_io": 0, 00:12:45.880 "completed_nvme_io": 288, 00:12:45.880 "transports": [ 00:12:45.880 { 00:12:45.880 "trtype": "TCP" 00:12:45.880 } 00:12:45.880 ] 00:12:45.880 }, 00:12:45.880 { 00:12:45.880 "name": "nvmf_tgt_poll_group_003", 00:12:45.880 "admin_qpairs": 2, 00:12:45.880 "io_qpairs": 168, 
00:12:45.880 "current_admin_qpairs": 0, 00:12:45.880 "current_io_qpairs": 0, 00:12:45.880 "pending_bdev_io": 0, 00:12:45.880 "completed_nvme_io": 221, 00:12:45.880 "transports": [ 00:12:45.880 { 00:12:45.880 "trtype": "TCP" 00:12:45.880 } 00:12:45.880 ] 00:12:45.880 } 00:12:45.880 ] 00:12:45.880 }' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:45.880 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.139 rmmod nvme_tcp 00:12:46.139 rmmod nvme_fabrics 00:12:46.139 rmmod nvme_keyring 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1931727 ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1931727 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1931727 ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1931727 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1931727 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1931727' 00:12:46.139 killing process with pid 1931727 00:12:46.139 15:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1931727 00:12:46.139 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1931727 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.398 15:45:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.303 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.303 00:12:48.303 real 0m32.812s 00:12:48.303 user 1m39.064s 00:12:48.303 sys 0m6.481s 00:12:48.303 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.303 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.303 ************************************ 00:12:48.303 END TEST 
nvmf_rpc 00:12:48.304 ************************************ 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.570 ************************************ 00:12:48.570 START TEST nvmf_invalid 00:12:48.570 ************************************ 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:48.570 * Looking for test storage... 00:12:48.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.570 --rc genhtml_branch_coverage=1 00:12:48.570 --rc genhtml_function_coverage=1 00:12:48.570 --rc genhtml_legend=1 00:12:48.570 --rc geninfo_all_blocks=1 00:12:48.570 --rc geninfo_unexecuted_blocks=1 00:12:48.570 00:12:48.570 ' 
00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.570 --rc genhtml_branch_coverage=1 00:12:48.570 --rc genhtml_function_coverage=1 00:12:48.570 --rc genhtml_legend=1 00:12:48.570 --rc geninfo_all_blocks=1 00:12:48.570 --rc geninfo_unexecuted_blocks=1 00:12:48.570 00:12:48.570 ' 00:12:48.570 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.570 --rc genhtml_branch_coverage=1 00:12:48.570 --rc genhtml_function_coverage=1 00:12:48.570 --rc genhtml_legend=1 00:12:48.571 --rc geninfo_all_blocks=1 00:12:48.571 --rc geninfo_unexecuted_blocks=1 00:12:48.571 00:12:48.571 ' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.571 --rc genhtml_branch_coverage=1 00:12:48.571 --rc genhtml_function_coverage=1 00:12:48.571 --rc genhtml_legend=1 00:12:48.571 --rc geninfo_all_blocks=1 00:12:48.571 --rc geninfo_unexecuted_blocks=1 00:12:48.571 00:12:48.571 ' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.571 15:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.571 
15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.571 15:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.571 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.572 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.572 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.572 15:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.572 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.572 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.833 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.833 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.833 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.833 15:45:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:55.474 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.475 15:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.475 15:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:55.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:55.475 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:55.475 Found net devices under 0000:af:00.0: cvl_0_0 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:55.475 Found net devices under 0000:af:00.1: cvl_0_1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.475 15:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.475 15:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:12:55.475 00:12:55.475 --- 10.0.0.2 ping statistics --- 00:12:55.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.475 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:55.475 00:12:55.475 --- 10.0.0.1 ping statistics --- 00:12:55.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.475 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.475 15:45:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1939475 00:12:55.475 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1939475 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1939475 ']' 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.476 15:45:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 [2024-12-09 15:45:49.796252] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:12:55.476 [2024-12-09 15:45:49.796295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.476 [2024-12-09 15:45:49.875852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.476 [2024-12-09 15:45:49.917215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.476 [2024-12-09 15:45:49.917254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.476 [2024-12-09 15:45:49.917261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.476 [2024-12-09 15:45:49.917267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.476 [2024-12-09 15:45:49.917275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.476 [2024-12-09 15:45:49.918726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.476 [2024-12-09 15:45:49.918832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.476 [2024-12-09 15:45:49.918940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.476 [2024-12-09 15:45:49.918942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23851 00:12:55.476 [2024-12-09 15:45:50.233132] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode23851", 00:12:55.476 "tgt_name": "foobar", 00:12:55.476 "method": "nvmf_create_subsystem", 00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error 
response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32603, 00:12:55.476 "message": "Unable to find target foobar" 00:12:55.476 }' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode23851", 00:12:55.476 "tgt_name": "foobar", 00:12:55.476 "method": "nvmf_create_subsystem", 00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32603, 00:12:55.476 "message": "Unable to find target foobar" 00:12:55.476 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14205 00:12:55.476 [2024-12-09 15:45:50.421764] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14205: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode14205", 00:12:55.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:55.476 "method": "nvmf_create_subsystem", 00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32602, 00:12:55.476 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:55.476 }' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode14205", 00:12:55.476 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:55.476 "method": "nvmf_create_subsystem", 
00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32602, 00:12:55.476 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:55.476 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6513 00:12:55.476 [2024-12-09 15:45:50.622418] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6513: invalid model number 'SPDK_Controller' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode6513", 00:12:55.476 "model_number": "SPDK_Controller\u001f", 00:12:55.476 "method": "nvmf_create_subsystem", 00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32602, 00:12:55.476 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.476 }' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:55.476 { 00:12:55.476 "nqn": "nqn.2016-06.io.spdk:cnode6513", 00:12:55.476 "model_number": "SPDK_Controller\u001f", 00:12:55.476 "method": "nvmf_create_subsystem", 00:12:55.476 "req_id": 1 00:12:55.476 } 00:12:55.476 Got JSON-RPC error response 00:12:55.476 response: 00:12:55.476 { 00:12:55.476 "code": -32602, 00:12:55.476 "message": "Invalid MN SPDK_Controller\u001f" 00:12:55.476 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.476 15:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:55.476 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:55.476 15:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.477 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 
00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:55.736 
15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.736 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.737 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:12:55.737 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'aZf5$4f?U1.!T5@$MiF1[' 00:12:55.737 15:45:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aZf5$4f?U1.!T5@$MiF1[' nqn.2016-06.io.spdk:cnode14245 00:12:55.996 [2024-12-09 15:45:50.983663] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14245: invalid serial number 'aZf5$4f?U1.!T5@$MiF1[' 00:12:55.996 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:55.996 { 00:12:55.996 "nqn": "nqn.2016-06.io.spdk:cnode14245", 00:12:55.997 "serial_number": "aZf5$4f?U1.!T5@$MiF1[", 00:12:55.997 "method": "nvmf_create_subsystem", 00:12:55.997 "req_id": 1 00:12:55.997 } 00:12:55.997 Got JSON-RPC error response 00:12:55.997 response: 00:12:55.997 { 00:12:55.997 "code": -32602, 00:12:55.997 "message": "Invalid SN aZf5$4f?U1.!T5@$MiF1[" 00:12:55.997 }' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:55.997 { 00:12:55.997 "nqn": "nqn.2016-06.io.spdk:cnode14245", 00:12:55.997 "serial_number": "aZf5$4f?U1.!T5@$MiF1[", 00:12:55.997 "method": "nvmf_create_subsystem", 00:12:55.997 "req_id": 1 00:12:55.997 } 00:12:55.997 Got JSON-RPC error response 00:12:55.997 response: 00:12:55.997 { 00:12:55.997 "code": -32602, 00:12:55.997 "message": "Invalid SN aZf5$4f?U1.!T5@$MiF1[" 00:12:55.997 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:55.997 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:55.997 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.998 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:56.257 
15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:56.257 15:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:12:56.257 15:45:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '#+SEfbv s8EN<;PcD_O$;}Io*eMJ.?T?URK2"4p /dev/null' 00:12:58.329 15:45:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.866 00:13:00.866 real 0m11.983s 00:13:00.866 user 0m18.506s 00:13:00.866 sys 0m5.385s 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:00.866 ************************************ 00:13:00.866 END TEST nvmf_invalid 00:13:00.866 ************************************ 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.866 ************************************ 00:13:00.866 START TEST nvmf_connect_stress 00:13:00.866 ************************************ 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:00.866 * Looking for test storage... 00:13:00.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.866 15:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.866 --rc genhtml_branch_coverage=1 00:13:00.866 --rc genhtml_function_coverage=1 00:13:00.866 --rc genhtml_legend=1 00:13:00.866 --rc 
geninfo_all_blocks=1 00:13:00.866 --rc geninfo_unexecuted_blocks=1 00:13:00.866 00:13:00.866 ' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.866 --rc genhtml_branch_coverage=1 00:13:00.866 --rc genhtml_function_coverage=1 00:13:00.866 --rc genhtml_legend=1 00:13:00.866 --rc geninfo_all_blocks=1 00:13:00.866 --rc geninfo_unexecuted_blocks=1 00:13:00.866 00:13:00.866 ' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.866 --rc genhtml_branch_coverage=1 00:13:00.866 --rc genhtml_function_coverage=1 00:13:00.866 --rc genhtml_legend=1 00:13:00.866 --rc geninfo_all_blocks=1 00:13:00.866 --rc geninfo_unexecuted_blocks=1 00:13:00.866 00:13:00.866 ' 00:13:00.866 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.866 --rc genhtml_branch_coverage=1 00:13:00.866 --rc genhtml_function_coverage=1 00:13:00.866 --rc genhtml_legend=1 00:13:00.866 --rc geninfo_all_blocks=1 00:13:00.866 --rc geninfo_unexecuted_blocks=1 00:13:00.866 00:13:00.866 ' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.867 
15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.867 15:45:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.438 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.439 15:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:07.439 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:07.439 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.439 15:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:07.439 Found net devices under 0000:af:00.0: cvl_0_0 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:07.439 Found net devices under 0000:af:00.1: cvl_0_1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:13:07.439 00:13:07.439 --- 10.0.0.2 ping statistics --- 00:13:07.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.439 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:13:07.439 00:13:07.439 --- 10.0.0.1 ping statistics --- 00:13:07.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.439 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:07.439 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1943611 00:13:07.440 15:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1943611 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1943611 ']' 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.440 15:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 [2024-12-09 15:46:01.862142] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:13:07.440 [2024-12-09 15:46:01.862185] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.440 [2024-12-09 15:46:01.945593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:07.440 [2024-12-09 15:46:01.988096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:07.440 [2024-12-09 15:46:01.988133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.440 [2024-12-09 15:46:01.988140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.440 [2024-12-09 15:46:01.988146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.440 [2024-12-09 15:46:01.988152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.440 [2024-12-09 15:46:01.989525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.440 [2024-12-09 15:46:01.989543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.440 [2024-12-09 15:46:01.989547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.440 [2024-12-09 15:46:02.138300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 [2024-12-09 15:46:02.158511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 NULL1 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1943804 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.440 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.441 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.441 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:07.441 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.441 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.441 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.699 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.699 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:07.699 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.699 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.699 15:46:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.264 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.264 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:08.264 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.264 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.264 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.521 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.521 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:08.521 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.521 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.521 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.779 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.779 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:08.779 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.779 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.779 15:46:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.036 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.036 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:09.036 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.036 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.036 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.602 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.602 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:09.602 15:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.602 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.602 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.859 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.859 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:09.859 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.859 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.859 15:46:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.116 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.116 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:10.116 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.116 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.116 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.374 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.374 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:10.374 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.374 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.374 
15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.632 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:10.632 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.632 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.632 15:46:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.197 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.197 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:11.197 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.197 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.197 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.455 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.455 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:11.455 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.455 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.455 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.712 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.712 
15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:11.712 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.712 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.712 15:46:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.969 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.969 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:11.969 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.969 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.969 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.535 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.535 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:12.535 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.535 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.535 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.793 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.793 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:12.793 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:12.793 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.793 15:46:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.050 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.050 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:13.050 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.050 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.050 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.308 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.308 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:13.308 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.308 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.308 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.566 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.566 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:13.566 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.566 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.566 15:46:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:14.132 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.132 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:14.132 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.132 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.132 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.390 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:14.390 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.390 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.390 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.648 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.648 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:14.648 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.648 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.648 15:46:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.905 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.905 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1943804 00:13:14.905 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.905 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.905 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.471 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.471 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:15.471 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.471 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.471 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.729 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.729 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:15.729 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.729 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.729 15:46:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.987 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.987 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:15.987 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.987 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:15.987 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.245 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.245 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:16.245 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.245 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.245 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.502 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.502 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:16.502 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.502 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.502 15:46:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.068 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.068 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:17.068 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.068 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.068 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.326 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1943804 00:13:17.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1943804) - No such process 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1943804 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.327 rmmod nvme_tcp 00:13:17.327 rmmod nvme_fabrics 00:13:17.327 rmmod nvme_keyring 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1943611 ']' 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1943611 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1943611 ']' 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1943611 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943611 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943611' 00:13:17.327 killing process with pid 1943611 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1943611 00:13:17.327 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1943611 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.586 15:46:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.491 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.491 00:13:19.491 real 0m19.061s 00:13:19.491 user 0m39.538s 00:13:19.491 sys 0m8.490s 00:13:19.491 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.491 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.491 ************************************ 00:13:19.491 END TEST nvmf_connect_stress 00:13:19.491 ************************************ 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.750 ************************************ 00:13:19.750 START TEST nvmf_fused_ordering 00:13:19.750 ************************************ 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.750 * Looking for test storage... 00:13:19.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.750 15:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.750 --rc genhtml_branch_coverage=1 00:13:19.750 --rc genhtml_function_coverage=1 00:13:19.750 --rc genhtml_legend=1 00:13:19.750 --rc geninfo_all_blocks=1 00:13:19.750 --rc geninfo_unexecuted_blocks=1 00:13:19.750 00:13:19.750 ' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.750 --rc genhtml_branch_coverage=1 00:13:19.750 --rc genhtml_function_coverage=1 00:13:19.750 --rc genhtml_legend=1 00:13:19.750 --rc geninfo_all_blocks=1 00:13:19.750 --rc geninfo_unexecuted_blocks=1 00:13:19.750 00:13:19.750 ' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:19.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.750 --rc genhtml_branch_coverage=1 00:13:19.750 --rc genhtml_function_coverage=1 00:13:19.750 --rc genhtml_legend=1 00:13:19.750 --rc geninfo_all_blocks=1 00:13:19.750 --rc geninfo_unexecuted_blocks=1 00:13:19.750 00:13:19.750 ' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:19.750 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:19.750 --rc genhtml_branch_coverage=1 00:13:19.750 --rc genhtml_function_coverage=1 00:13:19.750 --rc genhtml_legend=1 00:13:19.750 --rc geninfo_all_blocks=1 00:13:19.750 --rc geninfo_unexecuted_blocks=1 00:13:19.750 00:13:19.750 ' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.750 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.010 15:46:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.010 15:46:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.575 15:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:26.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.575 15:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:26.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.575 15:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.575 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:26.576 Found net devices under 0000:af:00.0: cvl_0_0 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:26.576 Found net devices under 0000:af:00.1: cvl_0_1 
00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:13:26.576 00:13:26.576 --- 10.0.0.2 ping statistics --- 00:13:26.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.576 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:13:26.576 00:13:26.576 --- 10.0.0.1 ping statistics --- 00:13:26.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.576 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:26.576 15:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1948945 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1948945 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1948945 ']' 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.576 15:46:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 [2024-12-09 15:46:20.956415] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:13:26.576 [2024-12-09 15:46:20.956457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.576 [2024-12-09 15:46:21.020721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.576 [2024-12-09 15:46:21.060358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.576 [2024-12-09 15:46:21.060393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.576 [2024-12-09 15:46:21.060404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.576 [2024-12-09 15:46:21.060410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.576 [2024-12-09 15:46:21.060415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:26.576 [2024-12-09 15:46:21.060903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 [2024-12-09 15:46:21.204710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 [2024-12-09 15:46:21.224896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.576 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 NULL1 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:46:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.577 [2024-12-09 15:46:21.284990] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:13:26.577 [2024-12-09 15:46:21.285036] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948974 ] 00:13:26.577 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.577 Namespace ID: 1 size: 1GB 00:13:26.577 fused_ordering(0) 00:13:26.577 fused_ordering(1) 00:13:26.577 fused_ordering(2) 00:13:26.577 fused_ordering(3) 00:13:26.577 fused_ordering(4) 00:13:26.577 fused_ordering(5) 00:13:26.577 fused_ordering(6) 00:13:26.577 fused_ordering(7) 00:13:26.577 fused_ordering(8) 00:13:26.577 fused_ordering(9) 00:13:26.577 fused_ordering(10) 00:13:26.577 fused_ordering(11) 00:13:26.577 fused_ordering(12) 00:13:26.577 fused_ordering(13) 00:13:26.577 fused_ordering(14) 00:13:26.577 fused_ordering(15) 00:13:26.577 fused_ordering(16) 00:13:26.577 fused_ordering(17) 00:13:26.577 fused_ordering(18) 00:13:26.577 fused_ordering(19) 00:13:26.577 fused_ordering(20) 00:13:26.577 fused_ordering(21) 00:13:26.577 fused_ordering(22) 00:13:26.577 fused_ordering(23) 00:13:26.577 fused_ordering(24) 00:13:26.577 fused_ordering(25) 00:13:26.577 fused_ordering(26) 00:13:26.577 fused_ordering(27) 00:13:26.577 
[… fused_ordering(28) through fused_ordering(936) logged in unbroken ascending order; timestamps advance from 00:13:26.577 through 00:13:26.836, 00:13:27.096 and 00:13:27.355 to 00:13:27.923 …]
fused_ordering(937) 00:13:27.923 fused_ordering(938) 00:13:27.923 fused_ordering(939) 00:13:27.923 fused_ordering(940) 00:13:27.923 fused_ordering(941) 00:13:27.923 fused_ordering(942) 00:13:27.923 fused_ordering(943) 00:13:27.923 fused_ordering(944) 00:13:27.923 fused_ordering(945) 00:13:27.923 fused_ordering(946) 00:13:27.923 fused_ordering(947) 00:13:27.923 fused_ordering(948) 00:13:27.923 fused_ordering(949) 00:13:27.923 fused_ordering(950) 00:13:27.923 fused_ordering(951) 00:13:27.923 fused_ordering(952) 00:13:27.923 fused_ordering(953) 00:13:27.923 fused_ordering(954) 00:13:27.923 fused_ordering(955) 00:13:27.923 fused_ordering(956) 00:13:27.923 fused_ordering(957) 00:13:27.923 fused_ordering(958) 00:13:27.923 fused_ordering(959) 00:13:27.923 fused_ordering(960) 00:13:27.923 fused_ordering(961) 00:13:27.923 fused_ordering(962) 00:13:27.923 fused_ordering(963) 00:13:27.923 fused_ordering(964) 00:13:27.923 fused_ordering(965) 00:13:27.923 fused_ordering(966) 00:13:27.923 fused_ordering(967) 00:13:27.923 fused_ordering(968) 00:13:27.923 fused_ordering(969) 00:13:27.923 fused_ordering(970) 00:13:27.923 fused_ordering(971) 00:13:27.923 fused_ordering(972) 00:13:27.923 fused_ordering(973) 00:13:27.923 fused_ordering(974) 00:13:27.923 fused_ordering(975) 00:13:27.923 fused_ordering(976) 00:13:27.923 fused_ordering(977) 00:13:27.923 fused_ordering(978) 00:13:27.923 fused_ordering(979) 00:13:27.923 fused_ordering(980) 00:13:27.923 fused_ordering(981) 00:13:27.923 fused_ordering(982) 00:13:27.923 fused_ordering(983) 00:13:27.923 fused_ordering(984) 00:13:27.923 fused_ordering(985) 00:13:27.923 fused_ordering(986) 00:13:27.923 fused_ordering(987) 00:13:27.923 fused_ordering(988) 00:13:27.923 fused_ordering(989) 00:13:27.923 fused_ordering(990) 00:13:27.923 fused_ordering(991) 00:13:27.923 fused_ordering(992) 00:13:27.923 fused_ordering(993) 00:13:27.923 fused_ordering(994) 00:13:27.923 fused_ordering(995) 00:13:27.923 fused_ordering(996) 00:13:27.923 fused_ordering(997) 
00:13:27.923 fused_ordering(998) 00:13:27.923 fused_ordering(999) 00:13:27.923 fused_ordering(1000) 00:13:27.923 fused_ordering(1001) 00:13:27.923 fused_ordering(1002) 00:13:27.923 fused_ordering(1003) 00:13:27.923 fused_ordering(1004) 00:13:27.923 fused_ordering(1005) 00:13:27.923 fused_ordering(1006) 00:13:27.923 fused_ordering(1007) 00:13:27.923 fused_ordering(1008) 00:13:27.923 fused_ordering(1009) 00:13:27.923 fused_ordering(1010) 00:13:27.923 fused_ordering(1011) 00:13:27.923 fused_ordering(1012) 00:13:27.923 fused_ordering(1013) 00:13:27.923 fused_ordering(1014) 00:13:27.923 fused_ordering(1015) 00:13:27.923 fused_ordering(1016) 00:13:27.923 fused_ordering(1017) 00:13:27.923 fused_ordering(1018) 00:13:27.923 fused_ordering(1019) 00:13:27.923 fused_ordering(1020) 00:13:27.923 fused_ordering(1021) 00:13:27.923 fused_ordering(1022) 00:13:27.923 fused_ordering(1023) 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:27.923 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:27.924 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:27.924 15:46:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:27.924 rmmod nvme_tcp 00:13:27.924 rmmod nvme_fabrics 00:13:27.924 rmmod nvme_keyring 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1948945 ']' 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1948945 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1948945 ']' 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1948945 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948945 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948945' 00:13:27.924 killing process with pid 1948945 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1948945 00:13:27.924 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1948945 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.183 15:46:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.717 00:13:30.717 real 0m10.559s 00:13:30.717 user 0m4.940s 00:13:30.717 sys 0m5.683s 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:30.717 ************************************ 00:13:30.717 END TEST nvmf_fused_ordering 00:13:30.717 ************************************ 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.717 15:46:25 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.717 ************************************ 00:13:30.717 START TEST nvmf_ns_masking 00:13:30.717 ************************************ 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:30.717 * Looking for test storage... 00:13:30.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.717 15:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.717 --rc genhtml_branch_coverage=1 00:13:30.717 --rc genhtml_function_coverage=1 00:13:30.717 --rc genhtml_legend=1 00:13:30.717 --rc geninfo_all_blocks=1 00:13:30.717 --rc geninfo_unexecuted_blocks=1 00:13:30.717 00:13:30.717 ' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.717 --rc genhtml_branch_coverage=1 00:13:30.717 --rc genhtml_function_coverage=1 00:13:30.717 --rc genhtml_legend=1 00:13:30.717 --rc geninfo_all_blocks=1 00:13:30.717 --rc geninfo_unexecuted_blocks=1 00:13:30.717 00:13:30.717 ' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.717 --rc genhtml_branch_coverage=1 00:13:30.717 --rc genhtml_function_coverage=1 00:13:30.717 --rc genhtml_legend=1 00:13:30.717 --rc geninfo_all_blocks=1 00:13:30.717 --rc geninfo_unexecuted_blocks=1 00:13:30.717 00:13:30.717 ' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:30.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.717 --rc genhtml_branch_coverage=1 00:13:30.717 --rc 
genhtml_function_coverage=1 00:13:30.717 --rc genhtml_legend=1 00:13:30.717 --rc geninfo_all_blocks=1 00:13:30.717 --rc geninfo_unexecuted_blocks=1 00:13:30.717 00:13:30.717 ' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:30.717 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=33f20a97-9cb7-4e9b-be99-33d015e12cb8 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0af5ef71-a16e-4958-8260-882e3ed0d8ac 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2547cc8d-ea12-4990-ab2b-d29182500ed9 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.718 15:46:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.290 15:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.290 15:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:37.290 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:37.290 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:13:37.290 Found net devices under 0000:af:00.0: cvl_0_0 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:37.290 Found net devices under 0000:af:00.1: cvl_0_1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.290 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:13:37.291 00:13:37.291 --- 10.0.0.2 ping statistics --- 00:13:37.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.291 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:13:37.291 00:13:37.291 --- 10.0.0.1 ping statistics --- 00:13:37.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.291 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1952888 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1952888 
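The nvmf_tcp_init trace above moves one port of the E810 pair (cvl_0_0) into a private network namespace and leaves its peer (cvl_0_1) in the host namespace, so target and initiator get separate network stacks on one machine. A dry-run sketch of that sequence (interface names, addresses, and port taken from the log; `run` only prints by default because the real commands need root and the actual NICs):

```shell
# Dry-run sketch of the netns-based NVMe/TCP loopback setup from the
# trace above. Set DRY_RUN=0 to actually execute (requires root + NICs).
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}
TARGET_IF=cvl_0_0        # interface handed to the target namespace
INITIATOR_IF=cvl_0_1     # stays in the host namespace (initiator side)
NS=cvl_0_0_ns_spdk

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The pings in the log (10.0.0.1 from inside the namespace, 10.0.0.2 from the host) then verify both directions of this link before the target starts.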
00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1952888 ']' 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.291 [2024-12-09 15:46:31.642266] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:13:37.291 [2024-12-09 15:46:31.642309] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.291 [2024-12-09 15:46:31.700654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.291 [2024-12-09 15:46:31.737512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.291 [2024-12-09 15:46:31.737542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
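After launching `nvmf_tgt` inside the namespace, `waitforlisten` above blocks until the process is up and listening on /var/tmp/spdk.sock. A generic sketch of that polling pattern (a temp file stands in for the RPC socket; the retry count and interval are illustrative, not SPDK's actual values):

```shell
# Poll until a server's socket path appears, with a bounded retry count.
wait_for_path() {
  local path=$1 max_retries=${2:-100}
  local i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
      echo "timed out waiting for $path" >&2
      return 1
    fi
    sleep 0.1
  done
  echo "ready: $path"
}

# Demo: a background "server" creates its socket after a short delay.
sock=$(mktemp -u)
( sleep 0.3; : > "$sock" ) &
wait_for_path "$sock"
```

The real helper additionally confirms the PID is alive on each iteration, so a crashed target fails fast instead of timing out.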
00:13:37.291 [2024-12-09 15:46:31.737548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.291 [2024-12-09 15:46:31.737554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.291 [2024-12-09 15:46:31.737559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.291 [2024-12-09 15:46:31.738081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.291 15:46:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.291 [2024-12-09 15:46:32.048860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.291 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:37.291 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:37.291 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:37.291 Malloc1 00:13:37.291 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:37.291 Malloc2 00:13:37.291 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.549 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:37.807 15:46:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.065 [2024-12-09 15:46:33.074939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.065 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:38.065 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2547cc8d-ea12-4990-ab2b-d29182500ed9 -a 10.0.0.2 -s 4420 -i 4 00:13:38.324 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.324 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.324 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.324 15:46:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:38.324 15:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.224 [ 0]:0x1 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.224 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.482 
15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a75e35ea2a34aa8b2824d66777119fb 00:13:40.482 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a75e35ea2a34aa8b2824d66777119fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.482 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:40.482 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:40.482 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.482 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:40.741 [ 0]:0x1 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a75e35ea2a34aa8b2824d66777119fb 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a75e35ea2a34aa8b2824d66777119fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:40.741 [ 1]:0x2 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
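The `ns_is_visible` checks above decide visibility from the NGUID that `nvme id-ns` reports: a namespace masked from this host shows an all-zero NGUID. A pure-bash sketch of just that comparison, using the NGUID values printed in the trace (no controller is queried; on a real system the value would come from `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`):

```shell
# Visibility test on a pre-fetched NGUID string: 32 zeros means the
# namespace is attached but masked from this host NQN.
ns_is_visible() {
  local nguid=$1
  local zero=00000000000000000000000000000000
  if [ "$nguid" != "$zero" ]; then
    echo visible
  else
    echo masked
  fi
}

ns_is_visible 5a75e35ea2a34aa8b2824d66777119fb   # Malloc1's NGUID in the trace -> visible
ns_is_visible 00000000000000000000000000000000   # what a masked namespace reports -> masked
```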
00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.741 15:46:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.999 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2547cc8d-ea12-4990-ab2b-d29182500ed9 -a 10.0.0.2 -s 4420 -i 4 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.257 15:46:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:41.257 15:46:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
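The `NOT ns_is_visible 0x1` call being traced here is autotest_common.sh's negative assertion: run a command and succeed only if it fails, which is how the test proves the `--no-auto-visible` namespace stays hidden. A minimal stand-in with the same contract (not the SPDK implementation, which as the trace shows also validates the argument with `type -t` and inspects the exit status):

```shell
# Minimal negative-assertion helper: succeeds iff the wrapped command fails.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0
}

NOT false && echo "false failed, as expected"
NOT true  || echo "true succeeded, so NOT reports failure"
```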
00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.790 [ 0]:0x2 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.790 [ 0]:0x1 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a75e35ea2a34aa8b2824d66777119fb 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a75e35ea2a34aa8b2824d66777119fb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.790 [ 1]:0x2 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.790 15:46:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.049 [ 0]:0x2 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq 
-r .nguid 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.049 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.308 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:44.308 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2547cc8d-ea12-4990-ab2b-d29182500ed9 -a 10.0.0.2 -s 4420 -i 4 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:44.566 15:46:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:46.470 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.729 [ 0]:0x1 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.729 15:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a75e35ea2a34aa8b2824d66777119fb 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a75e35ea2a34aa8b2824d66777119fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.729 [ 1]:0x2 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.729 15:46:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.988 
15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.988 [ 0]:0x2 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.988 15:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:46.988 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:47.247 [2024-12-09 15:46:42.301041] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:47.247 request: 00:13:47.247 { 00:13:47.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.247 "nsid": 2, 00:13:47.247 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.247 "method": "nvmf_ns_remove_host", 00:13:47.247 "req_id": 1 00:13:47.247 } 00:13:47.247 Got JSON-RPC error response 00:13:47.247 response: 00:13:47.247 { 00:13:47.247 "code": -32602, 00:13:47.247 "message": "Invalid parameters" 00:13:47.247 } 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:47.247 15:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.247 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:47.248 [ 0]:0x2 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:47.248 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9f69aa1293b542b0a2b585664ed32971 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9f69aa1293b542b0a2b585664ed32971 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1954718 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1954718 
/var/tmp/host.sock 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1954718 ']' 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.507 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 [2024-12-09 15:46:42.582167] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:13:47.507 [2024-12-09 15:46:42.582212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954718 ] 00:13:47.507 [2024-12-09 15:46:42.658771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.507 [2024-12-09 15:46:42.699157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.766 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.766 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:47.766 15:46:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.024 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 33f20a97-9cb7-4e9b-be99-33d015e12cb8 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 33F20A979CB74E9BBE9933D015E12CB8 -i 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0af5ef71-a16e-4958-8260-882e3ed0d8ac 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:48.283 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0AF5EF71A16E49588260882E3ED0D8AC -i 00:13:48.542 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:48.801 15:46:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:49.059 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:49.059 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:49.319 nvme0n1 00:13:49.319 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:49.319 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:49.887 nvme1n2 00:13:49.887 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:49.887 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:49.887 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:49.887 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:49.887 15:46:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:49.887 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:49.887 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:49.887 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:49.887 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:50.145 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 33f20a97-9cb7-4e9b-be99-33d015e12cb8 == \3\3\f\2\0\a\9\7\-\9\c\b\7\-\4\e\9\b\-\b\e\9\9\-\3\3\d\0\1\5\e\1\2\c\b\8 ]] 00:13:50.145 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:50.145 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:50.145 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:50.403 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0af5ef71-a16e-4958-8260-882e3ed0d8ac == \0\a\f\5\e\f\7\1\-\a\1\6\e\-\4\9\5\8\-\8\2\6\0\-\8\8\2\e\3\e\d\0\d\8\a\c ]] 00:13:50.404 15:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.404 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 33f20a97-9cb7-4e9b-be99-33d015e12cb8 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 33F20A979CB74E9BBE9933D015E12CB8 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 33F20A979CB74E9BBE9933D015E12CB8 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:50.663 15:46:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 33F20A979CB74E9BBE9933D015E12CB8 00:13:50.922 [2024-12-09 15:46:45.991346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:50.922 [2024-12-09 15:46:45.991374] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:50.922 [2024-12-09 15:46:45.991382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:50.922 request: 00:13:50.922 { 00:13:50.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.922 "namespace": { 00:13:50.922 "bdev_name": "invalid", 00:13:50.922 "nsid": 1, 00:13:50.922 "nguid": "33F20A979CB74E9BBE9933D015E12CB8", 00:13:50.922 "no_auto_visible": false, 00:13:50.922 "hide_metadata": false 00:13:50.922 }, 00:13:50.922 "method": "nvmf_subsystem_add_ns", 00:13:50.922 "req_id": 1 00:13:50.922 } 00:13:50.922 Got JSON-RPC error response 00:13:50.922 response: 00:13:50.922 { 00:13:50.922 "code": -32602, 00:13:50.922 "message": "Invalid parameters" 00:13:50.922 } 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:50.922 15:46:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 33f20a97-9cb7-4e9b-be99-33d015e12cb8 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:50.922 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 33F20A979CB74E9BBE9933D015E12CB8 -i 00:13:51.181 15:46:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1954718 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1954718 ']' 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1954718 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:53.182 15:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.182 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1954718 00:13:53.442 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:53.442 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:53.442 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1954718' 00:13:53.442 killing process with pid 1954718 00:13:53.442 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1954718 00:13:53.442 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1954718 00:13:53.701 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:53.960 rmmod nvme_tcp 00:13:53.960 rmmod nvme_fabrics 00:13:53.960 rmmod nvme_keyring 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1952888 ']' 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1952888 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1952888 ']' 00:13:53.960 15:46:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1952888 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1952888 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1952888' 00:13:53.960 killing process with pid 1952888 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1952888 00:13:53.960 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1952888 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.220 15:46:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.125 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.125 00:13:56.125 real 0m25.913s 00:13:56.125 user 0m31.050s 00:13:56.125 sys 0m6.968s 00:13:56.125 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.125 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.125 ************************************ 00:13:56.125 END TEST nvmf_ns_masking 00:13:56.125 ************************************ 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.385 ************************************ 00:13:56.385 START TEST nvmf_nvme_cli 00:13:56.385 ************************************ 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:56.385 * Looking for test storage... 00:13:56.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.385 --rc genhtml_branch_coverage=1 00:13:56.385 --rc genhtml_function_coverage=1 00:13:56.385 --rc genhtml_legend=1 00:13:56.385 --rc geninfo_all_blocks=1 00:13:56.385 --rc geninfo_unexecuted_blocks=1 00:13:56.385 
00:13:56.385 ' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.385 --rc genhtml_branch_coverage=1 00:13:56.385 --rc genhtml_function_coverage=1 00:13:56.385 --rc genhtml_legend=1 00:13:56.385 --rc geninfo_all_blocks=1 00:13:56.385 --rc geninfo_unexecuted_blocks=1 00:13:56.385 00:13:56.385 ' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.385 --rc genhtml_branch_coverage=1 00:13:56.385 --rc genhtml_function_coverage=1 00:13:56.385 --rc genhtml_legend=1 00:13:56.385 --rc geninfo_all_blocks=1 00:13:56.385 --rc geninfo_unexecuted_blocks=1 00:13:56.385 00:13:56.385 ' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.385 --rc genhtml_branch_coverage=1 00:13:56.385 --rc genhtml_function_coverage=1 00:13:56.385 --rc genhtml_legend=1 00:13:56.385 --rc geninfo_all_blocks=1 00:13:56.385 --rc geninfo_unexecuted_blocks=1 00:13:56.385 00:13:56.385 ' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.385 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.386 15:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.386 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.646 15:46:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:03.216 15:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:03.216 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:03.216 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.216 15:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.216 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:03.216 Found net devices under 0000:af:00.0: cvl_0_0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:03.217 Found net devices under 0000:af:00.1: cvl_0_1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.217 15:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:14:03.217 00:14:03.217 --- 10.0.0.2 ping statistics --- 00:14:03.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.217 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:14:03.217 00:14:03.217 --- 10.0.0.1 ping statistics --- 00:14:03.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.217 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.217 15:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1959341 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1959341 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1959341 ']' 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 [2024-12-09 15:46:57.565767] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:14:03.217 [2024-12-09 15:46:57.565815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.217 [2024-12-09 15:46:57.647925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.217 [2024-12-09 15:46:57.690255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.217 [2024-12-09 15:46:57.690290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.217 [2024-12-09 15:46:57.690297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.217 [2024-12-09 15:46:57.690303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.217 [2024-12-09 15:46:57.690308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.217 [2024-12-09 15:46:57.691814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.217 [2024-12-09 15:46:57.691925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.217 [2024-12-09 15:46:57.691831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.217 [2024-12-09 15:46:57.691925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 [2024-12-09 15:46:57.841526] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 Malloc0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 Malloc1 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.217 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 [2024-12-09 15:46:57.938553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 15:46:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:03.218 00:14:03.218 Discovery Log Number of Records 2, Generation counter 2 00:14:03.218 =====Discovery Log Entry 0====== 00:14:03.218 trtype: tcp 00:14:03.218 adrfam: ipv4 00:14:03.218 subtype: current discovery subsystem 00:14:03.218 treq: not required 00:14:03.218 portid: 0 00:14:03.218 trsvcid: 4420 
00:14:03.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:03.218 traddr: 10.0.0.2 00:14:03.218 eflags: explicit discovery connections, duplicate discovery information 00:14:03.218 sectype: none 00:14:03.218 =====Discovery Log Entry 1====== 00:14:03.218 trtype: tcp 00:14:03.218 adrfam: ipv4 00:14:03.218 subtype: nvme subsystem 00:14:03.218 treq: not required 00:14:03.218 portid: 0 00:14:03.218 trsvcid: 4420 00:14:03.218 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:03.218 traddr: 10.0.0.2 00:14:03.218 eflags: none 00:14:03.218 sectype: none 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 
00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2 00:14:03.218 15:46:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:04.153 15:46:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 
15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:06.685 /dev/nvme0n2 00:14:06.685 /dev/nvme1n1 00:14:06.685 /dev/nvme1n2 ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:06.685 15:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:06.685 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n1 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme1n2 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.686 15:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.686 rmmod nvme_tcp 00:14:06.686 rmmod nvme_fabrics 00:14:06.686 rmmod nvme_keyring 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:06.686 15:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1959341 ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1959341 ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959341' 00:14:06.686 killing process with pid 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1959341 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:06.686 
15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.686 15:47:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.222 00:14:09.222 real 0m12.542s 00:14:09.222 user 0m18.379s 00:14:09.222 sys 0m5.035s 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.222 ************************************ 00:14:09.222 END TEST nvmf_nvme_cli 00:14:09.222 ************************************ 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.222 15:47:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.222 ************************************ 00:14:09.222 START TEST nvmf_vfio_user 00:14:09.222 ************************************ 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:09.222 * Looking for test storage... 00:14:09.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.222 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.223 15:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.223 15:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.223 --rc genhtml_branch_coverage=1 00:14:09.223 --rc genhtml_function_coverage=1 00:14:09.223 --rc genhtml_legend=1 00:14:09.223 --rc geninfo_all_blocks=1 00:14:09.223 --rc geninfo_unexecuted_blocks=1 00:14:09.223 00:14:09.223 ' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.223 --rc genhtml_branch_coverage=1 00:14:09.223 --rc genhtml_function_coverage=1 00:14:09.223 --rc genhtml_legend=1 00:14:09.223 --rc geninfo_all_blocks=1 00:14:09.223 --rc geninfo_unexecuted_blocks=1 00:14:09.223 00:14:09.223 ' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.223 --rc genhtml_branch_coverage=1 00:14:09.223 --rc genhtml_function_coverage=1 00:14:09.223 --rc genhtml_legend=1 00:14:09.223 --rc geninfo_all_blocks=1 00:14:09.223 --rc geninfo_unexecuted_blocks=1 00:14:09.223 00:14:09.223 ' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.223 --rc genhtml_branch_coverage=1 00:14:09.223 --rc genhtml_function_coverage=1 00:14:09.223 --rc genhtml_legend=1 00:14:09.223 --rc geninfo_all_blocks=1 00:14:09.223 --rc geninfo_unexecuted_blocks=1 00:14:09.223 00:14:09.223 ' 00:14:09.223 15:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.223 15:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:09.223 15:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:09.223 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1960615 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1960615' 00:14:09.224 Process pid: 1960615 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1960615 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1960615 ']' 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.224 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:09.224 [2024-12-09 15:47:04.284272] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:14:09.224 [2024-12-09 15:47:04.284318] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.224 [2024-12-09 15:47:04.340496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.224 [2024-12-09 15:47:04.381426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.224 [2024-12-09 15:47:04.381460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.224 [2024-12-09 15:47:04.381468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.224 [2024-12-09 15:47:04.381474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.224 [2024-12-09 15:47:04.381479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.224 [2024-12-09 15:47:04.383012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.224 [2024-12-09 15:47:04.383121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.224 [2024-12-09 15:47:04.383206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.224 [2024-12-09 15:47:04.383207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.482 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.482 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:09.482 15:47:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:10.420 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:10.678 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:10.678 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:10.678 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:10.678 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:10.678 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:10.678 Malloc1 00:14:10.936 15:47:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:10.936 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:11.195 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:11.453 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.453 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:11.453 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:11.712 Malloc2 00:14:11.712 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:11.970 15:47:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:11.970 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:12.227 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:12.227 [2024-12-09 15:47:07.378897] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:14:12.227 [2024-12-09 15:47:07.378929] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961092 ] 00:14:12.227 [2024-12-09 15:47:07.420669] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:12.227 [2024-12-09 15:47:07.427571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:12.227 [2024-12-09 15:47:07.427593] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f544a7ee000 00:14:12.227 [2024-12-09 15:47:07.428568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.429571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.430577] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.431580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.432603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.433591] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.434598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.435612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:12.227 [2024-12-09 15:47:07.436618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:12.227 [2024-12-09 15:47:07.436627] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f544a7e3000 00:14:12.227 [2024-12-09 15:47:07.437542] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:12.227 [2024-12-09 15:47:07.449487] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:12.227 [2024-12-09 15:47:07.449512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:12.227 [2024-12-09 15:47:07.454727] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:12.227 [2024-12-09 15:47:07.454760] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:12.227 [2024-12-09 15:47:07.454828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:12.227 [2024-12-09 15:47:07.454841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:12.227 [2024-12-09 15:47:07.454846] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:12.485 [2024-12-09 15:47:07.455728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:12.485 [2024-12-09 15:47:07.455739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:12.485 [2024-12-09 15:47:07.455746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:12.485 [2024-12-09 15:47:07.456730] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:12.485 [2024-12-09 15:47:07.456739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:12.485 [2024-12-09 15:47:07.456746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.457737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:12.485 [2024-12-09 15:47:07.457745] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.458749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:12.485 [2024-12-09 15:47:07.458757] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:12.485 [2024-12-09 15:47:07.458762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.458768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.458875] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:12.485 [2024-12-09 15:47:07.458880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.458884] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:12.485 [2024-12-09 15:47:07.459753] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:12.485 [2024-12-09 15:47:07.460754] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:12.485 [2024-12-09 15:47:07.461756] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:12.485 [2024-12-09 15:47:07.462755] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:12.485 [2024-12-09 15:47:07.462817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:12.485 [2024-12-09 15:47:07.463763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:12.485 [2024-12-09 15:47:07.463771] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:12.485 [2024-12-09 15:47:07.463775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.463792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:12.485 [2024-12-09 15:47:07.463799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.463816] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.485 [2024-12-09 15:47:07.463821] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.485 [2024-12-09 15:47:07.463826] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.485 [2024-12-09 15:47:07.463838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.463884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.463893] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:12.485 [2024-12-09 15:47:07.463898] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:12.485 [2024-12-09 15:47:07.463903] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:12.485 [2024-12-09 15:47:07.463907] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:12.485 [2024-12-09 15:47:07.463912] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:12.485 [2024-12-09 15:47:07.463916] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:12.485 [2024-12-09 15:47:07.463921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.463928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.463937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.463951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.463961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.485 [2024-12-09 
15:47:07.463969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.485 [2024-12-09 15:47:07.463976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.485 [2024-12-09 15:47:07.463983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:12.485 [2024-12-09 15:47:07.463987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.463995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.464011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.464016] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:12.485 [2024-12-09 15:47:07.464020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.464050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.464101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464115] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:12.485 [2024-12-09 15:47:07.464119] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:12.485 [2024-12-09 15:47:07.464122] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.485 [2024-12-09 15:47:07.464128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.464139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.464151] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:12.485 [2024-12-09 15:47:07.464159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.485 [2024-12-09 15:47:07.464176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.485 [2024-12-09 15:47:07.464179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.485 [2024-12-09 15:47:07.464185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.485 [2024-12-09 15:47:07.464203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:12.485 [2024-12-09 15:47:07.464213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:12.485 [2024-12-09 15:47:07.464233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:12.485 [2024-12-09 15:47:07.464237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.485 [2024-12-09 15:47:07.464240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.486 [2024-12-09 15:47:07.464245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464300] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:12.486 [2024-12-09 15:47:07.464304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:12.486 [2024-12-09 15:47:07.464309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:12.486 [2024-12-09 15:47:07.464324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464403] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:12.486 [2024-12-09 15:47:07.464407] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:12.486 [2024-12-09 15:47:07.464411] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:12.486 [2024-12-09 15:47:07.464414] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:12.486 [2024-12-09 15:47:07.464417] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:12.486 [2024-12-09 15:47:07.464422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:12.486 [2024-12-09 15:47:07.464429] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:12.486 [2024-12-09 15:47:07.464433] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:12.486 [2024-12-09 15:47:07.464436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.486 [2024-12-09 15:47:07.464441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464447] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:12.486 [2024-12-09 15:47:07.464452] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:12.486 [2024-12-09 15:47:07.464456] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.486 [2024-12-09 15:47:07.464462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464468] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:12.486 [2024-12-09 15:47:07.464472] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:12.486 [2024-12-09 15:47:07.464475] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:12.486 [2024-12-09 15:47:07.464480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:12.486 [2024-12-09 15:47:07.464486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:12.486 [2024-12-09 15:47:07.464512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:12.486 ===================================================== 00:14:12.486 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:12.486 ===================================================== 00:14:12.486 Controller Capabilities/Features 00:14:12.486 ================================ 00:14:12.486 Vendor ID: 4e58 00:14:12.486 Subsystem Vendor ID: 4e58 00:14:12.486 Serial Number: SPDK1 00:14:12.486 Model Number: SPDK bdev Controller 00:14:12.486 Firmware Version: 25.01 00:14:12.486 Recommended Arb Burst: 6 00:14:12.486 IEEE OUI Identifier: 8d 6b 50 00:14:12.486 Multi-path I/O 00:14:12.486 May have multiple subsystem ports: Yes 00:14:12.486 May have multiple controllers: Yes 00:14:12.486 Associated with SR-IOV VF: No 00:14:12.486 Max Data Transfer Size: 131072 00:14:12.486 Max Number of Namespaces: 32 00:14:12.486 Max Number of I/O Queues: 127 00:14:12.486 NVMe Specification Version (VS): 1.3 00:14:12.486 NVMe Specification Version (Identify): 1.3 00:14:12.486 Maximum Queue Entries: 256 00:14:12.486 Contiguous Queues Required: Yes 00:14:12.486 Arbitration Mechanisms Supported 00:14:12.486 Weighted Round Robin: Not Supported 00:14:12.486 Vendor Specific: Not Supported 00:14:12.486 Reset Timeout: 15000 ms 00:14:12.486 Doorbell Stride: 4 bytes 00:14:12.486 NVM Subsystem Reset: Not Supported 00:14:12.486 Command Sets Supported 00:14:12.486 NVM Command Set: Supported 00:14:12.486 Boot Partition: Not Supported 00:14:12.486 Memory 
Page Size Minimum: 4096 bytes 00:14:12.486 Memory Page Size Maximum: 4096 bytes 00:14:12.486 Persistent Memory Region: Not Supported 00:14:12.486 Optional Asynchronous Events Supported 00:14:12.486 Namespace Attribute Notices: Supported 00:14:12.486 Firmware Activation Notices: Not Supported 00:14:12.486 ANA Change Notices: Not Supported 00:14:12.486 PLE Aggregate Log Change Notices: Not Supported 00:14:12.486 LBA Status Info Alert Notices: Not Supported 00:14:12.486 EGE Aggregate Log Change Notices: Not Supported 00:14:12.486 Normal NVM Subsystem Shutdown event: Not Supported 00:14:12.486 Zone Descriptor Change Notices: Not Supported 00:14:12.486 Discovery Log Change Notices: Not Supported 00:14:12.486 Controller Attributes 00:14:12.486 128-bit Host Identifier: Supported 00:14:12.486 Non-Operational Permissive Mode: Not Supported 00:14:12.486 NVM Sets: Not Supported 00:14:12.486 Read Recovery Levels: Not Supported 00:14:12.486 Endurance Groups: Not Supported 00:14:12.486 Predictable Latency Mode: Not Supported 00:14:12.486 Traffic Based Keep ALive: Not Supported 00:14:12.486 Namespace Granularity: Not Supported 00:14:12.486 SQ Associations: Not Supported 00:14:12.486 UUID List: Not Supported 00:14:12.486 Multi-Domain Subsystem: Not Supported 00:14:12.486 Fixed Capacity Management: Not Supported 00:14:12.486 Variable Capacity Management: Not Supported 00:14:12.486 Delete Endurance Group: Not Supported 00:14:12.486 Delete NVM Set: Not Supported 00:14:12.486 Extended LBA Formats Supported: Not Supported 00:14:12.486 Flexible Data Placement Supported: Not Supported 00:14:12.486 00:14:12.486 Controller Memory Buffer Support 00:14:12.486 ================================ 00:14:12.486 Supported: No 00:14:12.486 00:14:12.486 Persistent Memory Region Support 00:14:12.486 ================================ 00:14:12.486 Supported: No 00:14:12.486 00:14:12.486 Admin Command Set Attributes 00:14:12.486 ============================ 00:14:12.486 Security Send/Receive: Not Supported 
00:14:12.486 Format NVM: Not Supported 00:14:12.486 Firmware Activate/Download: Not Supported 00:14:12.486 Namespace Management: Not Supported 00:14:12.486 Device Self-Test: Not Supported 00:14:12.486 Directives: Not Supported 00:14:12.486 NVMe-MI: Not Supported 00:14:12.486 Virtualization Management: Not Supported 00:14:12.486 Doorbell Buffer Config: Not Supported 00:14:12.486 Get LBA Status Capability: Not Supported 00:14:12.486 Command & Feature Lockdown Capability: Not Supported 00:14:12.486 Abort Command Limit: 4 00:14:12.486 Async Event Request Limit: 4 00:14:12.486 Number of Firmware Slots: N/A 00:14:12.486 Firmware Slot 1 Read-Only: N/A 00:14:12.486 Firmware Activation Without Reset: N/A 00:14:12.486 Multiple Update Detection Support: N/A 00:14:12.486 Firmware Update Granularity: No Information Provided 00:14:12.486 Per-Namespace SMART Log: No 00:14:12.486 Asymmetric Namespace Access Log Page: Not Supported 00:14:12.486 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:12.486 Command Effects Log Page: Supported 00:14:12.486 Get Log Page Extended Data: Supported 00:14:12.486 Telemetry Log Pages: Not Supported 00:14:12.486 Persistent Event Log Pages: Not Supported 00:14:12.486 Supported Log Pages Log Page: May Support 00:14:12.486 Commands Supported & Effects Log Page: Not Supported 00:14:12.486 Feature Identifiers & Effects Log Page:May Support 00:14:12.486 NVMe-MI Commands & Effects Log Page: May Support 00:14:12.486 Data Area 4 for Telemetry Log: Not Supported 00:14:12.486 Error Log Page Entries Supported: 128 00:14:12.486 Keep Alive: Supported 00:14:12.486 Keep Alive Granularity: 10000 ms 00:14:12.486 00:14:12.486 NVM Command Set Attributes 00:14:12.486 ========================== 00:14:12.486 Submission Queue Entry Size 00:14:12.486 Max: 64 00:14:12.486 Min: 64 00:14:12.486 Completion Queue Entry Size 00:14:12.486 Max: 16 00:14:12.486 Min: 16 00:14:12.486 Number of Namespaces: 32 00:14:12.486 Compare Command: Supported 00:14:12.486 Write Uncorrectable 
Command: Not Supported 00:14:12.486 Dataset Management Command: Supported 00:14:12.486 Write Zeroes Command: Supported 00:14:12.486 Set Features Save Field: Not Supported 00:14:12.486 Reservations: Not Supported 00:14:12.486 Timestamp: Not Supported 00:14:12.486 Copy: Supported 00:14:12.486 Volatile Write Cache: Present 00:14:12.486 Atomic Write Unit (Normal): 1 00:14:12.486 Atomic Write Unit (PFail): 1 00:14:12.486 Atomic Compare & Write Unit: 1 00:14:12.486 Fused Compare & Write: Supported 00:14:12.486 Scatter-Gather List 00:14:12.486 SGL Command Set: Supported (Dword aligned) 00:14:12.486 SGL Keyed: Not Supported 00:14:12.486 SGL Bit Bucket Descriptor: Not Supported 00:14:12.486 SGL Metadata Pointer: Not Supported 00:14:12.486 Oversized SGL: Not Supported 00:14:12.486 SGL Metadata Address: Not Supported 00:14:12.486 SGL Offset: Not Supported 00:14:12.486 Transport SGL Data Block: Not Supported 00:14:12.486 Replay Protected Memory Block: Not Supported 00:14:12.486 00:14:12.486 Firmware Slot Information 00:14:12.486 ========================= 00:14:12.486 Active slot: 1 00:14:12.486 Slot 1 Firmware Revision: 25.01 00:14:12.486 00:14:12.486 00:14:12.486 Commands Supported and Effects 00:14:12.486 ============================== 00:14:12.486 Admin Commands 00:14:12.486 -------------- 00:14:12.486 Get Log Page (02h): Supported 00:14:12.486 Identify (06h): Supported 00:14:12.486 Abort (08h): Supported 00:14:12.486 Set Features (09h): Supported 00:14:12.486 Get Features (0Ah): Supported 00:14:12.486 Asynchronous Event Request (0Ch): Supported 00:14:12.486 Keep Alive (18h): Supported 00:14:12.486 I/O Commands 00:14:12.486 ------------ 00:14:12.486 Flush (00h): Supported LBA-Change 00:14:12.486 Write (01h): Supported LBA-Change 00:14:12.486 Read (02h): Supported 00:14:12.486 Compare (05h): Supported 00:14:12.486 Write Zeroes (08h): Supported LBA-Change 00:14:12.486 Dataset Management (09h): Supported LBA-Change 00:14:12.486 Copy (19h): Supported LBA-Change 00:14:12.486 
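The opcode values printed in the command-effects section above ("Get Log Page (02h)", "Identify (06h)", and so on) are the standard NVMe admin opcode values. As an illustrative cross-check, the mapping below is assembled by hand from the log output, not produced by any SPDK tool:

```python
# Admin commands listed in the identify output above, paired with the
# opcode values the log prints for them (standard NVMe admin opcodes).
admin_opcodes = {
    "Get Log Page": 0x02,
    "Identify": 0x06,
    "Abort": 0x08,
    "Set Features": 0x09,
    "Get Features": 0x0A,
    "Asynchronous Event Request": 0x0C,
    "Keep Alive": 0x18,
}

# Reproduce the "<name> (<op>h)" rendering used by spdk_nvme_identify.
for name, op in admin_opcodes.items():
    print(f"{name} ({op:02X}h): Supported")
```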
00:14:12.486 Error Log 00:14:12.486 ========= 00:14:12.486 00:14:12.486 Arbitration 00:14:12.486 =========== 00:14:12.486 Arbitration Burst: 1 00:14:12.486 00:14:12.486 Power Management 00:14:12.486 ================ 00:14:12.486 Number of Power States: 1 00:14:12.486 Current Power State: Power State #0 00:14:12.486 Power State #0: 00:14:12.486 Max Power: 0.00 W 00:14:12.486 Non-Operational State: Operational 00:14:12.486 Entry Latency: Not Reported 00:14:12.486 Exit Latency: Not Reported 00:14:12.486 Relative Read Throughput: 0 00:14:12.486 Relative Read Latency: 0 00:14:12.486 Relative Write Throughput: 0 00:14:12.486 Relative Write Latency: 0 00:14:12.486 Idle Power: Not Reported 00:14:12.486 Active Power: Not Reported 00:14:12.486 Non-Operational Permissive Mode: Not Supported 00:14:12.486 00:14:12.486 Health Information 00:14:12.486 ================== 00:14:12.486 Critical Warnings: 00:14:12.486 Available Spare Space: OK 00:14:12.486 Temperature: OK 00:14:12.486 Device Reliability: OK 00:14:12.486 Read Only: No 00:14:12.486 Volatile Memory Backup: OK 00:14:12.486 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:12.486 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:12.486 Available Spare: 0% 00:14:12.486 Available Spare Threshold: 0% 00:14:12.486 Life Percentage Used: 0% 
[2024-12-09 15:47:07.464594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 
00:14:12.486 [2024-12-09 15:47:07.464606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:14:12.486 [2024-12-09 15:47:07.464632] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 
00:14:12.486 [2024-12-09 15:47:07.464641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.486 [2024-12-09 15:47:07.464647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.486 [2024-12-09 15:47:07.464652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.486 [2024-12-09 15:47:07.464658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:12.486 [2024-12-09 15:47:07.464772] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:12.486 [2024-12-09 15:47:07.464781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 
00:14:12.486 [2024-12-09 15:47:07.465775] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 
00:14:12.486 [2024-12-09 15:47:07.465825] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 
00:14:12.486 [2024-12-09 15:47:07.465831] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 
00:14:12.486 [2024-12-09 15:47:07.466779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 
00:14:12.486 [2024-12-09 15:47:07.466789] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 
00:14:12.486 [2024-12-09 15:47:07.466837] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 
00:14:12.486 [2024-12-09 15:47:07.469226] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:14:12.486 Data Units Read: 0 00:14:12.486 Data Units Written: 0 00:14:12.486 Host Read Commands: 0 00:14:12.486 Host Write Commands: 0 00:14:12.486 Controller Busy Time: 0 minutes 00:14:12.486 Power Cycles: 0 00:14:12.486 Power On Hours: 0 hours 00:14:12.486 Unsafe Shutdowns: 0 00:14:12.486 Unrecoverable Media Errors: 0 00:14:12.486 Lifetime Error Log Entries: 0 00:14:12.486 Warning Temperature Time: 0 minutes 00:14:12.486 Critical Temperature Time: 0 minutes 00:14:12.486 00:14:12.486 Number of Queues 00:14:12.486 ================ 00:14:12.486 Number of I/O Submission Queues: 127 00:14:12.486 Number of I/O Completion Queues: 127 00:14:12.486 00:14:12.486 Active Namespaces 00:14:12.486 ================= 00:14:12.486 Namespace ID:1 00:14:12.486 Error Recovery Timeout: Unlimited 00:14:12.486 Command Set Identifier: NVM (00h) 00:14:12.486 Deallocate: Supported 00:14:12.486 Deallocated/Unwritten Error: Not Supported 00:14:12.486 Deallocated Read Value: Unknown 00:14:12.486 Deallocate in Write Zeroes: Not Supported 00:14:12.486 Deallocated Guard Field: 0xFFFF 00:14:12.486 Flush: Supported 00:14:12.486 Reservation: Supported 00:14:12.486 Namespace Sharing Capabilities: Multiple Controllers 00:14:12.486 Size (in LBAs): 131072 (0GiB) 00:14:12.486 Capacity (in LBAs): 131072 (0GiB) 00:14:12.486 Utilization (in LBAs): 131072 (0GiB) 00:14:12.486 NGUID: C185E26CB798401B8F5BD4FE04B17E55 00:14:12.486 UUID: c185e26c-b798-401b-8f5b-d4fe04b17e55 00:14:12.486 Thin Provisioning: Not Supported 00:14:12.486 Per-NS Atomic Units: Yes 00:14:12.486 Atomic Boundary Size (Normal): 0 00:14:12.486 Atomic Boundary Size (PFail): 0 00:14:12.486 Atomic Boundary Offset: 0 00:14:12.486 Maximum Single Source Range Length: 65535 00:14:12.486 Maximum Copy Length: 65535 00:14:12.486 Maximum Source Range Count: 1 00:14:12.486 NGUID/EUI64 Never Reused: No 00:14:12.486 Namespace Write Protected: No 00:14:12.486 Number of LBA Formats: 1 00:14:12.486 Current LBA Format: LBA Format #00 00:14:12.486 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:12.486 00:14:12.486 15:47:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:12.486 [2024-12-09 15:47:07.698070] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:17.753 Initializing NVMe Controllers 00:14:17.753 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:17.753 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:17.753 Initialization complete. Launching workers. 00:14:17.753 ======================================================== 00:14:17.753 Latency(us) 00:14:17.753 Device Information : IOPS MiB/s Average min max 00:14:17.753 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39922.62 155.95 3206.03 961.49 9614.66 00:14:17.753 ======================================================== 00:14:17.753 Total : 39922.62 155.95 3206.03 961.49 9614.66 00:14:17.753 00:14:17.753 [2024-12-09 15:47:12.719794] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:17.753 15:47:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:17.753 [2024-12-09 15:47:12.955880] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.024 Initializing NVMe Controllers 00:14:23.024 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:23.024 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:23.024 Initialization complete. Launching workers. 00:14:23.024 ======================================================== 00:14:23.024 Latency(us) 00:14:23.024 Device Information : IOPS MiB/s Average min max 00:14:23.024 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15871.92 62.00 8069.97 4984.49 15962.61 00:14:23.024 ======================================================== 00:14:23.024 Total : 15871.92 62.00 8069.97 4984.49 15962.61 00:14:23.024 00:14:23.024 [2024-12-09 15:47:18.005963] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.024 15:47:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:23.024 [2024-12-09 15:47:18.206920] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.300 [2024-12-09 15:47:23.285533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.300 Initializing NVMe Controllers 00:14:28.300 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:28.300 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:28.300 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:28.300 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:28.300 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:28.300 Initialization complete. 
Launching workers. 00:14:28.300 Starting thread on core 2 00:14:28.300 Starting thread on core 3 00:14:28.300 Starting thread on core 1 00:14:28.300 15:47:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:28.559 [2024-12-09 15:47:23.574498] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.849 [2024-12-09 15:47:26.643775] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.849 Initializing NVMe Controllers 00:14:31.849 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.849 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.849 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:31.849 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:31.849 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:31.849 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:31.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:31.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:31.849 Initialization complete. Launching workers. 
00:14:31.849 Starting thread on core 1 with urgent priority queue 00:14:31.849 Starting thread on core 2 with urgent priority queue 00:14:31.849 Starting thread on core 3 with urgent priority queue 00:14:31.849 Starting thread on core 0 with urgent priority queue 00:14:31.849 SPDK bdev Controller (SPDK1 ) core 0: 7984.67 IO/s 12.52 secs/100000 ios 00:14:31.849 SPDK bdev Controller (SPDK1 ) core 1: 7802.00 IO/s 12.82 secs/100000 ios 00:14:31.849 SPDK bdev Controller (SPDK1 ) core 2: 7923.00 IO/s 12.62 secs/100000 ios 00:14:31.849 SPDK bdev Controller (SPDK1 ) core 3: 7842.00 IO/s 12.75 secs/100000 ios 00:14:31.849 ======================================================== 00:14:31.849 00:14:31.849 15:47:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:31.849 [2024-12-09 15:47:26.924682] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:31.849 Initializing NVMe Controllers 00:14:31.849 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.849 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.849 Namespace ID: 1 size: 0GB 00:14:31.849 Initialization complete. 00:14:31.849 INFO: using host memory buffer for IO 00:14:31.849 Hello world! 
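The summary tables above can be cross-checked arithmetically: spdk_nvme_perf's MiB/s column is IOPS times the 4 KiB I/O size (`-o 4096`), and the arbitration example's "secs/100000 ios" column is 100000 divided by IO/s. A quick sketch using the read-run and core-0 figures copied from the tables above:

```python
# spdk_nvme_perf read run: 39922.62 IOPS at 4 KiB should match the
# 155.95 MiB/s column printed in the latency table above.
read_iops = 39922.62
io_size = 4096                      # -o 4096 on the perf command line
mib_per_s = read_iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))          # ~155.95

# Arbitration run, core 0: 7984.67 IO/s over 100000 ios.
core0_iops = 7984.67
secs_per_100k = 100_000 / core0_iops
print(round(secs_per_100k, 2))      # ~12.52
```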
00:14:31.849 [2024-12-09 15:47:26.960904] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.849 15:47:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:32.108 [2024-12-09 15:47:27.242815] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.046 Initializing NVMe Controllers 00:14:33.046 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.046 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.046 Initialization complete. Launching workers. 00:14:33.046 submit (in ns) avg, min, max = 5793.0, 3152.4, 3998977.1 00:14:33.046 complete (in ns) avg, min, max = 22443.7, 1711.4, 4992878.1 00:14:33.046 00:14:33.046 Submit histogram 00:14:33.046 ================ 00:14:33.046 Range in us Cumulative Count 00:14:33.046 3.139 - 3.154: 0.0120% ( 2) 00:14:33.046 3.154 - 3.170: 0.0601% ( 8) 00:14:33.046 3.170 - 3.185: 0.0962% ( 6) 00:14:33.046 3.185 - 3.200: 0.1984% ( 17) 00:14:33.046 3.200 - 3.215: 0.4630% ( 44) 00:14:33.046 3.215 - 3.230: 1.6836% ( 203) 00:14:33.046 3.230 - 3.246: 5.0809% ( 565) 00:14:33.046 3.246 - 3.261: 10.1377% ( 841) 00:14:33.046 3.261 - 3.276: 15.5854% ( 906) 00:14:33.046 3.276 - 3.291: 21.8027% ( 1034) 00:14:33.046 3.291 - 3.307: 28.5671% ( 1125) 00:14:33.046 3.307 - 3.322: 34.4958% ( 986) 00:14:33.046 3.322 - 3.337: 40.4546% ( 991) 00:14:33.046 3.337 - 3.352: 46.2029% ( 956) 00:14:33.046 3.352 - 3.368: 51.9211% ( 951) 00:14:33.046 3.368 - 3.383: 57.7536% ( 970) 00:14:33.046 3.383 - 3.398: 65.2035% ( 1239) 00:14:33.046 3.398 - 3.413: 71.8177% ( 1100) 00:14:33.046 3.413 - 3.429: 76.8204% ( 832) 00:14:33.046 3.429 - 3.444: 81.6127% ( 797) 00:14:33.046 3.444 - 3.459: 84.4026% ( 464) 
00:14:33.046 3.459 - 3.474: 86.3147% ( 318) 00:14:33.046 3.474 - 3.490: 87.2888% ( 162) 00:14:33.046 3.490 - 3.505: 87.7939% ( 84) 00:14:33.046 3.505 - 3.520: 88.1486% ( 59) 00:14:33.046 3.520 - 3.535: 88.6958% ( 91) 00:14:33.046 3.535 - 3.550: 89.3873% ( 115) 00:14:33.046 3.550 - 3.566: 90.1329% ( 124) 00:14:33.046 3.566 - 3.581: 91.0529% ( 153) 00:14:33.046 3.581 - 3.596: 92.0630% ( 168) 00:14:33.046 3.596 - 3.611: 93.0491% ( 164) 00:14:33.046 3.611 - 3.627: 93.9631% ( 152) 00:14:33.046 3.627 - 3.642: 94.8830% ( 153) 00:14:33.046 3.642 - 3.657: 95.8992% ( 169) 00:14:33.046 3.657 - 3.672: 96.7771% ( 146) 00:14:33.046 3.672 - 3.688: 97.5287% ( 125) 00:14:33.046 3.688 - 3.703: 98.0518% ( 87) 00:14:33.046 3.703 - 3.718: 98.4908% ( 73) 00:14:33.046 3.718 - 3.733: 98.8816% ( 65) 00:14:33.046 3.733 - 3.749: 99.1642% ( 47) 00:14:33.046 3.749 - 3.764: 99.3626% ( 33) 00:14:33.046 3.764 - 3.779: 99.4829% ( 20) 00:14:33.046 3.779 - 3.794: 99.5911% ( 18) 00:14:33.046 3.794 - 3.810: 99.6092% ( 3) 00:14:33.046 3.810 - 3.825: 99.6332% ( 4) 00:14:33.046 3.825 - 3.840: 99.6452% ( 2) 00:14:33.046 3.840 - 3.855: 99.6573% ( 2) 00:14:33.046 3.855 - 3.870: 99.6633% ( 1) 00:14:33.046 3.870 - 3.886: 99.6693% ( 1) 00:14:33.046 4.053 - 4.084: 99.6753% ( 1) 00:14:33.046 5.333 - 5.364: 99.6813% ( 1) 00:14:33.046 5.425 - 5.455: 99.6873% ( 1) 00:14:33.046 5.455 - 5.486: 99.6933% ( 1) 00:14:33.046 5.516 - 5.547: 99.6994% ( 1) 00:14:33.046 5.547 - 5.577: 99.7054% ( 1) 00:14:33.046 5.577 - 5.608: 99.7174% ( 2) 00:14:33.046 5.821 - 5.851: 99.7234% ( 1) 00:14:33.046 6.004 - 6.034: 99.7294% ( 1) 00:14:33.046 6.095 - 6.126: 99.7354% ( 1) 00:14:33.046 6.705 - 6.735: 99.7414% ( 1) 00:14:33.046 6.735 - 6.766: 99.7475% ( 1) 00:14:33.046 6.766 - 6.796: 99.7535% ( 1) 00:14:33.046 6.796 - 6.827: 99.7595% ( 1) 00:14:33.046 6.827 - 6.857: 99.7715% ( 2) 00:14:33.046 6.918 - 6.949: 99.7775% ( 1) 00:14:33.046 6.949 - 6.979: 99.7895% ( 2) 00:14:33.046 6.979 - 7.010: 99.7956% ( 1) 00:14:33.046 7.040 - 7.070: 
99.8136% ( 3) 00:14:33.046 7.284 - 7.314: 99.8196% ( 1) 00:14:33.046 7.497 - 7.528: 99.8256% ( 1) 00:14:33.046 7.528 - 7.558: 99.8316% ( 1) 00:14:33.046 7.558 - 7.589: 99.8377% ( 1) 00:14:33.046 7.650 - 7.680: 99.8497% ( 2) 00:14:33.046 7.680 - 7.710: 99.8557% ( 1) 00:14:33.046 7.710 - 7.741: 99.8617% ( 1) 00:14:33.046 7.802 - 7.863: 99.8677% ( 1) 00:14:33.046 7.863 - 7.924: 99.8737% ( 1) 00:14:33.046 7.924 - 7.985: 99.8797% ( 1) 00:14:33.046 7.985 - 8.046: 99.8858% ( 1) 00:14:33.046 [2024-12-09 15:47:28.260881] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.306 8.168 - 8.229: 99.8918% ( 1) 00:14:33.306 8.229 - 8.290: 99.8978% ( 1) 00:14:33.306 8.411 - 8.472: 99.9038% ( 1) 00:14:33.306 8.960 - 9.021: 99.9098% ( 1) 00:14:33.306 9.935 - 9.996: 99.9158% ( 1) 00:14:33.306 13.714 - 13.775: 99.9218% ( 1) 00:14:33.306 14.750 - 14.811: 99.9278% ( 1) 00:14:33.306 19.017 - 19.139: 99.9399% ( 2) 00:14:33.306 3994.575 - 4025.783: 100.0000% ( 10) 00:14:33.306 00:14:33.306 Complete histogram 00:14:33.306 ================== 00:14:33.306 Range in us Cumulative Count 00:14:33.306 1.707 - 1.714: 0.0180% ( 3) 00:14:33.306 1.714 - 1.722: 0.1864% ( 28) 00:14:33.306 1.722 - 1.730: 0.3427% ( 26) 00:14:33.306 1.730 - 1.737: 0.4089% ( 11) 00:14:33.306 1.737 - 1.745: 0.4450% ( 6) 00:14:33.306 1.745 - 1.752: 0.4931% ( 8) 00:14:33.306 1.752 - 1.760: 0.9741% ( 80) 00:14:33.306 1.760 - 1.768: 8.2978% ( 1218) 00:14:33.306 1.768 - 1.775: 29.2346% ( 3482) 00:14:33.306 1.775 - 1.783: 44.0803% ( 2469) 00:14:33.306 1.783 - 1.790: 48.4276% ( 723) 00:14:33.306 1.790 - 1.798: 50.2856% ( 309) 00:14:33.306 1.798 - 1.806: 51.8490% ( 260) 00:14:33.306 1.806 - 1.813: 52.9794% ( 188) 00:14:33.306 1.813 - 1.821: 58.3849% ( 899) 00:14:33.306 1.821 - 1.829: 73.3750% ( 2493) 00:14:33.306 1.829 - 1.836: 86.9160% ( 2252) 00:14:33.306 1.836 - 1.844: 92.5861% ( 943) 00:14:33.306 1.844 - 1.851: 95.0815% ( 415) 00:14:33.306 1.851 - 1.859: 96.4524% ( 228) 
00:14:33.306 1.859 - 1.867: 97.2221% ( 128) 00:14:33.306 1.867 - 1.874: 97.4265% ( 34) 00:14:33.306 1.874 - 1.882: 97.5407% ( 19) 00:14:33.306 1.882 - 1.890: 97.8474% ( 51) 00:14:33.306 1.890 - 1.897: 98.1901% ( 57) 00:14:33.306 1.897 - 1.905: 98.6291% ( 73) 00:14:33.306 1.905 - 1.912: 98.9538% ( 54) 00:14:33.306 1.912 - 1.920: 99.1762% ( 37) 00:14:33.306 1.920 - 1.928: 99.2424% ( 11) 00:14:33.306 1.928 - 1.935: 99.2544% ( 2) 00:14:33.306 1.935 - 1.943: 99.2664% ( 2) 00:14:33.306 1.943 - 1.950: 99.2724% ( 1) 00:14:33.306 1.950 - 1.966: 99.2785% ( 1) 00:14:33.306 1.966 - 1.981: 99.2845% ( 1) 00:14:33.306 2.210 - 2.225: 99.2905% ( 1) 00:14:33.306 2.270 - 2.286: 99.2965% ( 1) 00:14:33.306 3.642 - 3.657: 99.3025% ( 1) 00:14:33.306 3.657 - 3.672: 99.3085% ( 1) 00:14:33.306 3.749 - 3.764: 99.3145% ( 1) 00:14:33.306 3.764 - 3.779: 99.3205% ( 1) 00:14:33.306 3.779 - 3.794: 99.3266% ( 1) 00:14:33.306 3.886 - 3.901: 99.3326% ( 1) 00:14:33.306 4.785 - 4.815: 99.3386% ( 1) 00:14:33.306 4.815 - 4.846: 99.3446% ( 1) 00:14:33.306 4.876 - 4.907: 99.3506% ( 1) 00:14:33.306 4.998 - 5.029: 99.3566% ( 1) 00:14:33.306 5.120 - 5.150: 99.3626% ( 1) 00:14:33.306 5.150 - 5.181: 99.3686% ( 1) 00:14:33.306 5.272 - 5.303: 99.3807% ( 2) 00:14:33.306 5.394 - 5.425: 99.3867% ( 1) 00:14:33.306 5.669 - 5.699: 99.3927% ( 1) 00:14:33.306 5.760 - 5.790: 99.3987% ( 1) 00:14:33.306 5.821 - 5.851: 99.4107% ( 2) 00:14:33.306 5.912 - 5.943: 99.4168% ( 1) 00:14:33.306 6.004 - 6.034: 99.4348% ( 3) 00:14:33.306 6.461 - 6.491: 99.4408% ( 1) 00:14:33.306 7.070 - 7.101: 99.4468% ( 1) 00:14:33.306 7.528 - 7.558: 99.4528% ( 1) 00:14:33.306 7.619 - 7.650: 99.4588% ( 1) 00:14:33.306 8.594 - 8.655: 99.4649% ( 1) 00:14:33.306 12.190 - 12.251: 99.4709% ( 1) 00:14:33.306 12.434 - 12.495: 99.4769% ( 1) 00:14:33.306 1849.051 - 1856.853: 99.4829% ( 1) 00:14:33.306 1997.288 - 2012.891: 99.4889% ( 1) 00:14:33.306 2777.478 - 2793.082: 99.4949% ( 1) 00:14:33.306 3994.575 - 4025.783: 99.9940% ( 83) 00:14:33.306 4962.011 - 
4993.219: 100.0000% ( 1) 00:14:33.306 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:33.307 [ 00:14:33.307 { 00:14:33.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.307 "subtype": "Discovery", 00:14:33.307 "listen_addresses": [], 00:14:33.307 "allow_any_host": true, 00:14:33.307 "hosts": [] 00:14:33.307 }, 00:14:33.307 { 00:14:33.307 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:33.307 "subtype": "NVMe", 00:14:33.307 "listen_addresses": [ 00:14:33.307 { 00:14:33.307 "trtype": "VFIOUSER", 00:14:33.307 "adrfam": "IPv4", 00:14:33.307 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:33.307 "trsvcid": "0" 00:14:33.307 } 00:14:33.307 ], 00:14:33.307 "allow_any_host": true, 00:14:33.307 "hosts": [], 00:14:33.307 "serial_number": "SPDK1", 00:14:33.307 "model_number": "SPDK bdev Controller", 00:14:33.307 "max_namespaces": 32, 00:14:33.307 "min_cntlid": 1, 00:14:33.307 "max_cntlid": 65519, 00:14:33.307 "namespaces": [ 00:14:33.307 { 00:14:33.307 "nsid": 1, 00:14:33.307 "bdev_name": "Malloc1", 00:14:33.307 "name": "Malloc1", 00:14:33.307 "nguid": "C185E26CB798401B8F5BD4FE04B17E55", 00:14:33.307 "uuid": "c185e26c-b798-401b-8f5b-d4fe04b17e55" 00:14:33.307 } 00:14:33.307 ] 00:14:33.307 }, 00:14:33.307 { 00:14:33.307 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:33.307 "subtype": "NVMe", 00:14:33.307 "listen_addresses": [ 00:14:33.307 { 00:14:33.307 "trtype": "VFIOUSER", 00:14:33.307 "adrfam": "IPv4", 00:14:33.307 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:33.307 "trsvcid": "0" 00:14:33.307 } 00:14:33.307 ], 00:14:33.307 "allow_any_host": true, 00:14:33.307 "hosts": [], 00:14:33.307 "serial_number": "SPDK2", 00:14:33.307 "model_number": "SPDK bdev Controller", 00:14:33.307 "max_namespaces": 32, 00:14:33.307 "min_cntlid": 1, 00:14:33.307 "max_cntlid": 65519, 00:14:33.307 "namespaces": [ 00:14:33.307 { 00:14:33.307 "nsid": 1, 00:14:33.307 "bdev_name": "Malloc2", 00:14:33.307 "name": "Malloc2", 00:14:33.307 "nguid": "CCBEC4024A5546E2AFF412795B846D06", 00:14:33.307 "uuid": "ccbec402-4a55-46e2-aff4-12795b846d06" 00:14:33.307 } 00:14:33.307 ] 00:14:33.307 } 00:14:33.307 ] 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1964513 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:33.307 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:33.566 [2024-12-09 15:47:28.657648] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.566 Malloc3 00:14:33.566 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:33.825 [2024-12-09 15:47:28.923626] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.825 15:47:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:33.825 Asynchronous Event Request test 00:14:33.825 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.825 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.825 Registering asynchronous event callbacks... 00:14:33.825 Starting namespace attribute notice tests for all controllers... 00:14:33.825 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:33.825 aer_cb - Changed Namespace 00:14:33.825 Cleaning up... 
00:14:34.085 [ 00:14:34.085 { 00:14:34.085 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:34.085 "subtype": "Discovery", 00:14:34.085 "listen_addresses": [], 00:14:34.085 "allow_any_host": true, 00:14:34.085 "hosts": [] 00:14:34.085 }, 00:14:34.085 { 00:14:34.085 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:34.085 "subtype": "NVMe", 00:14:34.085 "listen_addresses": [ 00:14:34.085 { 00:14:34.085 "trtype": "VFIOUSER", 00:14:34.085 "adrfam": "IPv4", 00:14:34.085 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:34.085 "trsvcid": "0" 00:14:34.085 } 00:14:34.085 ], 00:14:34.085 "allow_any_host": true, 00:14:34.085 "hosts": [], 00:14:34.085 "serial_number": "SPDK1", 00:14:34.085 "model_number": "SPDK bdev Controller", 00:14:34.085 "max_namespaces": 32, 00:14:34.085 "min_cntlid": 1, 00:14:34.085 "max_cntlid": 65519, 00:14:34.085 "namespaces": [ 00:14:34.085 { 00:14:34.085 "nsid": 1, 00:14:34.085 "bdev_name": "Malloc1", 00:14:34.085 "name": "Malloc1", 00:14:34.085 "nguid": "C185E26CB798401B8F5BD4FE04B17E55", 00:14:34.085 "uuid": "c185e26c-b798-401b-8f5b-d4fe04b17e55" 00:14:34.085 }, 00:14:34.085 { 00:14:34.085 "nsid": 2, 00:14:34.085 "bdev_name": "Malloc3", 00:14:34.085 "name": "Malloc3", 00:14:34.085 "nguid": "7C492AAD0E284A379B24CCEF97D61091", 00:14:34.085 "uuid": "7c492aad-0e28-4a37-9b24-ccef97d61091" 00:14:34.085 } 00:14:34.085 ] 00:14:34.085 }, 00:14:34.085 { 00:14:34.085 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:34.085 "subtype": "NVMe", 00:14:34.085 "listen_addresses": [ 00:14:34.085 { 00:14:34.085 "trtype": "VFIOUSER", 00:14:34.085 "adrfam": "IPv4", 00:14:34.085 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:34.085 "trsvcid": "0" 00:14:34.085 } 00:14:34.085 ], 00:14:34.085 "allow_any_host": true, 00:14:34.085 "hosts": [], 00:14:34.085 "serial_number": "SPDK2", 00:14:34.085 "model_number": "SPDK bdev Controller", 00:14:34.085 "max_namespaces": 32, 00:14:34.085 "min_cntlid": 1, 00:14:34.085 "max_cntlid": 65519, 00:14:34.085 "namespaces": [ 
00:14:34.085 { 00:14:34.085 "nsid": 1, 00:14:34.085 "bdev_name": "Malloc2", 00:14:34.085 "name": "Malloc2", 00:14:34.085 "nguid": "CCBEC4024A5546E2AFF412795B846D06", 00:14:34.085 "uuid": "ccbec402-4a55-46e2-aff4-12795b846d06" 00:14:34.085 } 00:14:34.085 ] 00:14:34.085 } 00:14:34.085 ] 00:14:34.085 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1964513 00:14:34.085 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:34.085 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:34.085 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:34.085 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:34.085 [2024-12-09 15:47:29.172934] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:14:34.085 [2024-12-09 15:47:29.172967] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964724 ] 00:14:34.085 [2024-12-09 15:47:29.211597] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:34.085 [2024-12-09 15:47:29.220473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.085 [2024-12-09 15:47:29.220497] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3c029ac000 00:14:34.085 [2024-12-09 15:47:29.221471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.222476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.223484] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.224495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.225504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.226505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.227512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.085 
[2024-12-09 15:47:29.228516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.085 [2024-12-09 15:47:29.229524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.085 [2024-12-09 15:47:29.229535] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3c029a1000 00:14:34.085 [2024-12-09 15:47:29.230449] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:34.085 [2024-12-09 15:47:29.244474] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:34.085 [2024-12-09 15:47:29.244500] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:34.085 [2024-12-09 15:47:29.246563] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:34.085 [2024-12-09 15:47:29.246597] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:34.085 [2024-12-09 15:47:29.246665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:34.085 [2024-12-09 15:47:29.246676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:34.085 [2024-12-09 15:47:29.246681] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:34.085 [2024-12-09 15:47:29.247567] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:34.086 [2024-12-09 15:47:29.247576] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:34.086 [2024-12-09 15:47:29.247583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:34.086 [2024-12-09 15:47:29.248570] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:34.086 [2024-12-09 15:47:29.248579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:34.086 [2024-12-09 15:47:29.248588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.249579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:34.086 [2024-12-09 15:47:29.249588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.250591] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:34.086 [2024-12-09 15:47:29.250599] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:34.086 [2024-12-09 15:47:29.250604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.250610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.250718] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:34.086 [2024-12-09 15:47:29.250722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.250727] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:34.086 [2024-12-09 15:47:29.251595] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:34.086 [2024-12-09 15:47:29.252610] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:34.086 [2024-12-09 15:47:29.253614] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:34.086 [2024-12-09 15:47:29.254624] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.086 [2024-12-09 15:47:29.254663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.086 [2024-12-09 15:47:29.255638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:34.086 [2024-12-09 15:47:29.255647] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.086 [2024-12-09 15:47:29.255652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.255668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:34.086 [2024-12-09 15:47:29.255675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.255689] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.086 [2024-12-09 15:47:29.255694] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.086 [2024-12-09 15:47:29.255697] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.086 [2024-12-09 15:47:29.255708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.266225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.266242] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:34.086 [2024-12-09 15:47:29.266248] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:34.086 [2024-12-09 15:47:29.266252] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:34.086 [2024-12-09 15:47:29.266257] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:34.086 [2024-12-09 15:47:29.266261] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:34.086 [2024-12-09 15:47:29.266265] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:34.086 [2024-12-09 15:47:29.266269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.266276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.266285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.274223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.274242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.086 [2024-12-09 15:47:29.274249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.086 [2024-12-09 15:47:29.274257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.086 [2024-12-09 15:47:29.274264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.086 [2024-12-09 15:47:29.274268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.274277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.274285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.282230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.282237] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:34.086 [2024-12-09 15:47:29.282242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.282248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.282253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.282260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.290223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.290279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.290289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:34.086 
[2024-12-09 15:47:29.290295] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:34.086 [2024-12-09 15:47:29.290300] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:34.086 [2024-12-09 15:47:29.290303] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.086 [2024-12-09 15:47:29.290309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.298221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.298232] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:34.086 [2024-12-09 15:47:29.298241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.298248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.298254] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.086 [2024-12-09 15:47:29.298258] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.086 [2024-12-09 15:47:29.298261] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.086 [2024-12-09 15:47:29.298266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.086 [2024-12-09 15:47:29.306222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:34.086 [2024-12-09 15:47:29.306236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.306243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:34.086 [2024-12-09 15:47:29.306249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.086 [2024-12-09 15:47:29.306253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.086 [2024-12-09 15:47:29.306256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.086 [2024-12-09 15:47:29.306262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.314222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.314233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314268] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:34.347 [2024-12-09 15:47:29.314272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:34.347 [2024-12-09 15:47:29.314276] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:34.347 [2024-12-09 15:47:29.314291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.322222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.322235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.330222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.330234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.338222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 
15:47:29.338234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.346222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.346237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:34.347 [2024-12-09 15:47:29.346242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:34.347 [2024-12-09 15:47:29.346245] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:34.347 [2024-12-09 15:47:29.346248] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:34.347 [2024-12-09 15:47:29.346251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:34.347 [2024-12-09 15:47:29.346257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:34.347 [2024-12-09 15:47:29.346263] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:34.347 [2024-12-09 15:47:29.346267] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:34.347 [2024-12-09 15:47:29.346270] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.347 [2024-12-09 15:47:29.346275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.346282] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:34.347 [2024-12-09 15:47:29.346285] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.347 [2024-12-09 15:47:29.346288] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.347 [2024-12-09 15:47:29.346294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.346300] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:34.347 [2024-12-09 15:47:29.346306] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:34.347 [2024-12-09 15:47:29.346309] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.347 [2024-12-09 15:47:29.346314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:34.347 [2024-12-09 15:47:29.354223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.354236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.354246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:34.347 [2024-12-09 15:47:29.354252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:34.347 ===================================================== 00:14:34.347 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:34.347 ===================================================== 00:14:34.347 Controller Capabilities/Features 00:14:34.347 
================================ 00:14:34.348 Vendor ID: 4e58 00:14:34.348 Subsystem Vendor ID: 4e58 00:14:34.348 Serial Number: SPDK2 00:14:34.348 Model Number: SPDK bdev Controller 00:14:34.348 Firmware Version: 25.01 00:14:34.348 Recommended Arb Burst: 6 00:14:34.348 IEEE OUI Identifier: 8d 6b 50 00:14:34.348 Multi-path I/O 00:14:34.348 May have multiple subsystem ports: Yes 00:14:34.348 May have multiple controllers: Yes 00:14:34.348 Associated with SR-IOV VF: No 00:14:34.348 Max Data Transfer Size: 131072 00:14:34.348 Max Number of Namespaces: 32 00:14:34.348 Max Number of I/O Queues: 127 00:14:34.348 NVMe Specification Version (VS): 1.3 00:14:34.348 NVMe Specification Version (Identify): 1.3 00:14:34.348 Maximum Queue Entries: 256 00:14:34.348 Contiguous Queues Required: Yes 00:14:34.348 Arbitration Mechanisms Supported 00:14:34.348 Weighted Round Robin: Not Supported 00:14:34.348 Vendor Specific: Not Supported 00:14:34.348 Reset Timeout: 15000 ms 00:14:34.348 Doorbell Stride: 4 bytes 00:14:34.348 NVM Subsystem Reset: Not Supported 00:14:34.348 Command Sets Supported 00:14:34.348 NVM Command Set: Supported 00:14:34.348 Boot Partition: Not Supported 00:14:34.348 Memory Page Size Minimum: 4096 bytes 00:14:34.348 Memory Page Size Maximum: 4096 bytes 00:14:34.348 Persistent Memory Region: Not Supported 00:14:34.348 Optional Asynchronous Events Supported 00:14:34.348 Namespace Attribute Notices: Supported 00:14:34.348 Firmware Activation Notices: Not Supported 00:14:34.348 ANA Change Notices: Not Supported 00:14:34.348 PLE Aggregate Log Change Notices: Not Supported 00:14:34.348 LBA Status Info Alert Notices: Not Supported 00:14:34.348 EGE Aggregate Log Change Notices: Not Supported 00:14:34.348 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.348 Zone Descriptor Change Notices: Not Supported 00:14:34.348 Discovery Log Change Notices: Not Supported 00:14:34.348 Controller Attributes 00:14:34.348 128-bit Host Identifier: Supported 00:14:34.348 
Non-Operational Permissive Mode: Not Supported 00:14:34.348 NVM Sets: Not Supported 00:14:34.348 Read Recovery Levels: Not Supported 00:14:34.348 Endurance Groups: Not Supported 00:14:34.348 Predictable Latency Mode: Not Supported 00:14:34.348 Traffic Based Keep ALive: Not Supported 00:14:34.348 Namespace Granularity: Not Supported 00:14:34.348 SQ Associations: Not Supported 00:14:34.348 UUID List: Not Supported 00:14:34.348 Multi-Domain Subsystem: Not Supported 00:14:34.348 Fixed Capacity Management: Not Supported 00:14:34.348 Variable Capacity Management: Not Supported 00:14:34.348 Delete Endurance Group: Not Supported 00:14:34.348 Delete NVM Set: Not Supported 00:14:34.348 Extended LBA Formats Supported: Not Supported 00:14:34.348 Flexible Data Placement Supported: Not Supported 00:14:34.348 00:14:34.348 Controller Memory Buffer Support 00:14:34.348 ================================ 00:14:34.348 Supported: No 00:14:34.348 00:14:34.348 Persistent Memory Region Support 00:14:34.348 ================================ 00:14:34.348 Supported: No 00:14:34.348 00:14:34.348 Admin Command Set Attributes 00:14:34.348 ============================ 00:14:34.348 Security Send/Receive: Not Supported 00:14:34.348 Format NVM: Not Supported 00:14:34.348 Firmware Activate/Download: Not Supported 00:14:34.348 Namespace Management: Not Supported 00:14:34.348 Device Self-Test: Not Supported 00:14:34.348 Directives: Not Supported 00:14:34.348 NVMe-MI: Not Supported 00:14:34.348 Virtualization Management: Not Supported 00:14:34.348 Doorbell Buffer Config: Not Supported 00:14:34.348 Get LBA Status Capability: Not Supported 00:14:34.348 Command & Feature Lockdown Capability: Not Supported 00:14:34.348 Abort Command Limit: 4 00:14:34.348 Async Event Request Limit: 4 00:14:34.348 Number of Firmware Slots: N/A 00:14:34.348 Firmware Slot 1 Read-Only: N/A 00:14:34.348 Firmware Activation Without Reset: N/A 00:14:34.348 Multiple Update Detection Support: N/A 00:14:34.348 Firmware Update 
Granularity: No Information Provided 00:14:34.348 Per-Namespace SMART Log: No 00:14:34.348 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.348 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:34.348 Command Effects Log Page: Supported 00:14:34.348 Get Log Page Extended Data: Supported 00:14:34.348 Telemetry Log Pages: Not Supported 00:14:34.348 Persistent Event Log Pages: Not Supported 00:14:34.348 Supported Log Pages Log Page: May Support 00:14:34.348 Commands Supported & Effects Log Page: Not Supported 00:14:34.348 Feature Identifiers & Effects Log Page:May Support 00:14:34.348 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.348 Data Area 4 for Telemetry Log: Not Supported 00:14:34.348 Error Log Page Entries Supported: 128 00:14:34.348 Keep Alive: Supported 00:14:34.348 Keep Alive Granularity: 10000 ms 00:14:34.348 00:14:34.348 NVM Command Set Attributes 00:14:34.348 ========================== 00:14:34.348 Submission Queue Entry Size 00:14:34.348 Max: 64 00:14:34.348 Min: 64 00:14:34.348 Completion Queue Entry Size 00:14:34.348 Max: 16 00:14:34.348 Min: 16 00:14:34.348 Number of Namespaces: 32 00:14:34.348 Compare Command: Supported 00:14:34.348 Write Uncorrectable Command: Not Supported 00:14:34.348 Dataset Management Command: Supported 00:14:34.348 Write Zeroes Command: Supported 00:14:34.348 Set Features Save Field: Not Supported 00:14:34.348 Reservations: Not Supported 00:14:34.348 Timestamp: Not Supported 00:14:34.348 Copy: Supported 00:14:34.348 Volatile Write Cache: Present 00:14:34.348 Atomic Write Unit (Normal): 1 00:14:34.348 Atomic Write Unit (PFail): 1 00:14:34.348 Atomic Compare & Write Unit: 1 00:14:34.348 Fused Compare & Write: Supported 00:14:34.348 Scatter-Gather List 00:14:34.348 SGL Command Set: Supported (Dword aligned) 00:14:34.348 SGL Keyed: Not Supported 00:14:34.348 SGL Bit Bucket Descriptor: Not Supported 00:14:34.348 SGL Metadata Pointer: Not Supported 00:14:34.348 Oversized SGL: Not Supported 00:14:34.348 SGL 
Metadata Address: Not Supported 00:14:34.348 SGL Offset: Not Supported 00:14:34.348 Transport SGL Data Block: Not Supported 00:14:34.348 Replay Protected Memory Block: Not Supported 00:14:34.348 00:14:34.348 Firmware Slot Information 00:14:34.348 ========================= 00:14:34.348 Active slot: 1 00:14:34.348 Slot 1 Firmware Revision: 25.01 00:14:34.348 00:14:34.348 00:14:34.348 Commands Supported and Effects 00:14:34.348 ============================== 00:14:34.348 Admin Commands 00:14:34.348 -------------- 00:14:34.348 Get Log Page (02h): Supported 00:14:34.348 Identify (06h): Supported 00:14:34.348 Abort (08h): Supported 00:14:34.348 Set Features (09h): Supported 00:14:34.348 Get Features (0Ah): Supported 00:14:34.348 Asynchronous Event Request (0Ch): Supported 00:14:34.348 Keep Alive (18h): Supported 00:14:34.348 I/O Commands 00:14:34.348 ------------ 00:14:34.348 Flush (00h): Supported LBA-Change 00:14:34.348 Write (01h): Supported LBA-Change 00:14:34.348 Read (02h): Supported 00:14:34.348 Compare (05h): Supported 00:14:34.348 Write Zeroes (08h): Supported LBA-Change 00:14:34.348 Dataset Management (09h): Supported LBA-Change 00:14:34.348 Copy (19h): Supported LBA-Change 00:14:34.348 00:14:34.348 Error Log 00:14:34.348 ========= 00:14:34.348 00:14:34.348 Arbitration 00:14:34.348 =========== 00:14:34.348 Arbitration Burst: 1 00:14:34.348 00:14:34.348 Power Management 00:14:34.348 ================ 00:14:34.348 Number of Power States: 1 00:14:34.348 Current Power State: Power State #0 00:14:34.348 Power State #0: 00:14:34.348 Max Power: 0.00 W 00:14:34.348 Non-Operational State: Operational 00:14:34.348 Entry Latency: Not Reported 00:14:34.348 Exit Latency: Not Reported 00:14:34.348 Relative Read Throughput: 0 00:14:34.348 Relative Read Latency: 0 00:14:34.348 Relative Write Throughput: 0 00:14:34.348 Relative Write Latency: 0 00:14:34.348 Idle Power: Not Reported 00:14:34.348 Active Power: Not Reported 00:14:34.348 Non-Operational Permissive Mode: Not 
Supported 00:14:34.348 00:14:34.348 Health Information 00:14:34.348 ================== 00:14:34.348 Critical Warnings: 00:14:34.348 Available Spare Space: OK 00:14:34.348 Temperature: OK 00:14:34.348 Device Reliability: OK 00:14:34.348 Read Only: No 00:14:34.348 Volatile Memory Backup: OK 00:14:34.348 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:34.348 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:34.348 Available Spare: 0% 00:14:34.348 Available Spare Threshold: 0% 00:14:34.349 Life Percentage Used: 0% 00:14:34.349 Data Units Read: 0 00:14:34.349 Data Units Written: 0 00:14:34.349 Host Read Commands: 0 00:14:34.349 Host Write Commands: 0 00:14:34.349 Controller Busy Time: 0 minutes 00:14:34.349 Power Cycles: 0 00:14:34.349 Power On Hours: 0 hours 00:14:34.349 Unsafe Shutdowns: 0 00:14:34.349 Unrecoverable Media Errors: 0 00:14:34.349 Lifetime Error Log Entries: 0 00:14:34.349 Warning Temperature Time: 0 minutes 00:14:34.349 Critical Temperature Time: 0 minutes 00:14:34.349
[2024-12-09 15:47:29.354337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:34.348 [2024-12-09 15:47:29.362223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:34.348 [2024-12-09 15:47:29.362253] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:34.348 [2024-12-09 15:47:29.362262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.348 [2024-12-09 15:47:29.362267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.348 [2024-12-09 15:47:29.362273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.349 [2024-12-09 15:47:29.362279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.349 [2024-12-09 15:47:29.362333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:34.349 [2024-12-09 15:47:29.362343] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:34.349 [2024-12-09 15:47:29.363331] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.349 [2024-12-09 15:47:29.363375] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:34.349 [2024-12-09 15:47:29.363381] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:34.349 [2024-12-09 15:47:29.364338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:34.349 [2024-12-09 15:47:29.364349] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:34.349 [2024-12-09 15:47:29.364400] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:34.349 [2024-12-09 15:47:29.365355] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:34.349 00:14:34.349 Number of Queues 00:14:34.349 ================ 00:14:34.349 Number of I/O Submission Queues: 127 00:14:34.349 Number of I/O Completion Queues: 127 00:14:34.349 00:14:34.349 Active Namespaces 00:14:34.349 ================= 00:14:34.349 Namespace ID:1 00:14:34.349 Error Recovery Timeout: Unlimited
00:14:34.349 Command Set Identifier: NVM (00h) 00:14:34.349 Deallocate: Supported 00:14:34.349 Deallocated/Unwritten Error: Not Supported 00:14:34.349 Deallocated Read Value: Unknown 00:14:34.349 Deallocate in Write Zeroes: Not Supported 00:14:34.349 Deallocated Guard Field: 0xFFFF 00:14:34.349 Flush: Supported 00:14:34.349 Reservation: Supported 00:14:34.349 Namespace Sharing Capabilities: Multiple Controllers 00:14:34.349 Size (in LBAs): 131072 (0GiB) 00:14:34.349 Capacity (in LBAs): 131072 (0GiB) 00:14:34.349 Utilization (in LBAs): 131072 (0GiB) 00:14:34.349 NGUID: CCBEC4024A5546E2AFF412795B846D06 00:14:34.349 UUID: ccbec402-4a55-46e2-aff4-12795b846d06 00:14:34.349 Thin Provisioning: Not Supported 00:14:34.349 Per-NS Atomic Units: Yes 00:14:34.349 Atomic Boundary Size (Normal): 0 00:14:34.349 Atomic Boundary Size (PFail): 0 00:14:34.349 Atomic Boundary Offset: 0 00:14:34.349 Maximum Single Source Range Length: 65535 00:14:34.349 Maximum Copy Length: 65535 00:14:34.349 Maximum Source Range Count: 1 00:14:34.349 NGUID/EUI64 Never Reused: No 00:14:34.349 Namespace Write Protected: No 00:14:34.349 Number of LBA Formats: 1 00:14:34.349 Current LBA Format: LBA Format #00 00:14:34.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:34.349 00:14:34.349 15:47:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:34.608 [2024-12-09 15:47:29.598628] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:39.879 Initializing NVMe Controllers 00:14:39.879 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:39.879 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:39.879 Initialization complete. Launching workers. 00:14:39.879 ======================================================== 00:14:39.879 Latency(us) 00:14:39.879 Device Information : IOPS MiB/s Average min max 00:14:39.879 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39927.90 155.97 3205.61 964.73 7599.01 00:14:39.879 ======================================================== 00:14:39.879 Total : 39927.90 155.97 3205.61 964.73 7599.01 00:14:39.879 00:14:39.879 [2024-12-09 15:47:34.699479] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:39.879 15:47:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:39.879 [2024-12-09 15:47:34.931178] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.152 Initializing NVMe Controllers 00:14:45.152 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.152 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:45.152 Initialization complete. Launching workers. 
00:14:45.152 ======================================================== 00:14:45.152 Latency(us) 00:14:45.152 Device Information : IOPS MiB/s Average min max 00:14:45.152 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39916.64 155.92 3206.52 977.45 10342.92 00:14:45.152 ======================================================== 00:14:45.152 Total : 39916.64 155.92 3206.52 977.45 10342.92 00:14:45.152 00:14:45.152 [2024-12-09 15:47:39.951138] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.152 15:47:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:45.152 [2024-12-09 15:47:40.165588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.427 [2024-12-09 15:47:45.310316] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.427 Initializing NVMe Controllers 00:14:50.427 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.427 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:50.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:50.427 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:50.427 Initialization complete. Launching workers. 
00:14:50.427 Starting thread on core 2 00:14:50.427 Starting thread on core 3 00:14:50.427 Starting thread on core 1 00:14:50.427 15:47:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:50.427 [2024-12-09 15:47:45.604627] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.619 [2024-12-09 15:47:49.198406] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.619 Initializing NVMe Controllers 00:14:54.619 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.619 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.619 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:54.619 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:54.619 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:54.619 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:54.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:54.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:54.619 Initialization complete. Launching workers. 
00:14:54.619 Starting thread on core 1 with urgent priority queue 00:14:54.619 Starting thread on core 2 with urgent priority queue 00:14:54.619 Starting thread on core 3 with urgent priority queue 00:14:54.619 Starting thread on core 0 with urgent priority queue 00:14:54.619 SPDK bdev Controller (SPDK2 ) core 0: 6590.33 IO/s 15.17 secs/100000 ios 00:14:54.619 SPDK bdev Controller (SPDK2 ) core 1: 6440.67 IO/s 15.53 secs/100000 ios 00:14:54.619 SPDK bdev Controller (SPDK2 ) core 2: 6035.33 IO/s 16.57 secs/100000 ios 00:14:54.619 SPDK bdev Controller (SPDK2 ) core 3: 7614.67 IO/s 13.13 secs/100000 ios 00:14:54.619 ======================================================== 00:14:54.619 00:14:54.619 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:54.619 [2024-12-09 15:47:49.484636] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.619 Initializing NVMe Controllers 00:14:54.619 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.619 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.619 Namespace ID: 1 size: 0GB 00:14:54.619 Initialization complete. 00:14:54.619 INFO: using host memory buffer for IO 00:14:54.619 Hello world! 
00:14:54.619 [2024-12-09 15:47:49.496713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.619 15:47:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:54.619 [2024-12-09 15:47:49.781636] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.999 Initializing NVMe Controllers 00:14:55.999 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.999 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.999 Initialization complete. Launching workers. 00:14:55.999 submit (in ns) avg, min, max = 6049.9, 3129.5, 4000412.4 00:14:55.999 complete (in ns) avg, min, max = 19184.7, 1714.3, 4121409.5 00:14:55.999 00:14:55.999 Submit histogram 00:14:55.999 ================ 00:14:55.999 Range in us Cumulative Count 00:14:55.999 3.124 - 3.139: 0.0060% ( 1) 00:14:55.999 3.185 - 3.200: 0.0181% ( 2) 00:14:55.999 3.200 - 3.215: 0.3380% ( 53) 00:14:55.999 3.215 - 3.230: 1.8592% ( 252) 00:14:55.999 3.230 - 3.246: 5.2940% ( 569) 00:14:55.999 3.246 - 3.261: 9.5316% ( 702) 00:14:55.999 3.261 - 3.276: 14.4151% ( 809) 00:14:55.999 3.276 - 3.291: 20.8318% ( 1063) 00:14:55.999 3.291 - 3.307: 27.0554% ( 1031) 00:14:55.999 3.307 - 3.322: 32.9591% ( 978) 00:14:55.999 3.322 - 3.337: 39.2008% ( 1034) 00:14:55.999 3.337 - 3.352: 45.0139% ( 963) 00:14:55.999 3.352 - 3.368: 50.6459% ( 933) 00:14:55.999 3.368 - 3.383: 56.3866% ( 951) 00:14:55.999 3.383 - 3.398: 63.2380% ( 1135) 00:14:55.999 3.398 - 3.413: 69.0511% ( 963) 00:14:55.999 3.413 - 3.429: 74.2726% ( 865) 00:14:55.999 3.429 - 3.444: 78.9629% ( 777) 00:14:55.999 3.444 - 3.459: 82.2649% ( 547) 00:14:55.999 3.459 - 3.474: 84.9632% ( 447) 00:14:55.999 3.474 - 3.490: 86.2248% ( 209) 
00:14:55.999 3.490 - 3.505: 87.0095% ( 130) 00:14:55.999 3.505 - 3.520: 87.7037% ( 115) 00:14:55.999 3.520 - 3.535: 88.2108% ( 84) 00:14:55.999 3.535 - 3.550: 88.7843% ( 95) 00:14:55.999 3.550 - 3.566: 89.4724% ( 114) 00:14:55.999 3.566 - 3.581: 90.4141% ( 156) 00:14:55.999 3.581 - 3.596: 91.1988% ( 130) 00:14:55.999 3.596 - 3.611: 92.1466% ( 157) 00:14:55.999 3.611 - 3.627: 93.0219% ( 145) 00:14:55.999 3.627 - 3.642: 94.0662% ( 173) 00:14:56.000 3.642 - 3.657: 94.9475% ( 146) 00:14:56.000 3.657 - 3.672: 95.8348% ( 147) 00:14:56.000 3.672 - 3.688: 96.6015% ( 127) 00:14:56.000 3.688 - 3.703: 97.3983% ( 132) 00:14:56.000 3.703 - 3.718: 97.9235% ( 87) 00:14:56.000 3.718 - 3.733: 98.4788% ( 92) 00:14:56.000 3.733 - 3.749: 98.8591% ( 63) 00:14:56.000 3.749 - 3.764: 99.0583% ( 33) 00:14:56.000 3.764 - 3.779: 99.2454% ( 31) 00:14:56.000 3.779 - 3.794: 99.3118% ( 11) 00:14:56.000 3.794 - 3.810: 99.4145% ( 17) 00:14:56.000 3.810 - 3.825: 99.5110% ( 16) 00:14:56.000 3.825 - 3.840: 99.5654% ( 9) 00:14:56.000 3.840 - 3.855: 99.5835% ( 3) 00:14:56.000 3.855 - 3.870: 99.6016% ( 3) 00:14:56.000 3.962 - 3.992: 99.6076% ( 1) 00:14:56.000 3.992 - 4.023: 99.6137% ( 1) 00:14:56.000 4.023 - 4.053: 99.6197% ( 1) 00:14:56.000 4.084 - 4.114: 99.6257% ( 1) 00:14:56.000 4.114 - 4.145: 99.6378% ( 2) 00:14:56.000 5.211 - 5.242: 99.6438% ( 1) 00:14:56.000 5.394 - 5.425: 99.6499% ( 1) 00:14:56.000 5.486 - 5.516: 99.6559% ( 1) 00:14:56.000 5.516 - 5.547: 99.6680% ( 2) 00:14:56.000 5.882 - 5.912: 99.6740% ( 1) 00:14:56.000 5.973 - 6.004: 99.6861% ( 2) 00:14:56.000 6.187 - 6.217: 99.6921% ( 1) 00:14:56.000 6.278 - 6.309: 99.6982% ( 1) 00:14:56.000 6.339 - 6.370: 99.7042% ( 1) 00:14:56.000 6.370 - 6.400: 99.7102% ( 1) 00:14:56.000 6.400 - 6.430: 99.7163% ( 1) 00:14:56.000 6.430 - 6.461: 99.7223% ( 1) 00:14:56.000 6.461 - 6.491: 99.7284% ( 1) 00:14:56.000 6.552 - 6.583: 99.7344% ( 1) 00:14:56.000 6.583 - 6.613: 99.7465% ( 2) 00:14:56.000 6.796 - 6.827: 99.7525% ( 1) 00:14:56.000 6.918 - 6.949: 
99.7585% ( 1) 00:14:56.000 6.949 - 6.979: 99.7646% ( 1) 00:14:56.000 6.979 - 7.010: 99.7706% ( 1) 00:14:56.000 7.010 - 7.040: 99.7767% ( 1) 00:14:56.000 7.162 - 7.192: 99.7887% ( 2) 00:14:56.000 7.223 - 7.253: 99.7948% ( 1) 00:14:56.000 7.253 - 7.284: 99.8008% ( 1) 00:14:56.000 7.314 - 7.345: 99.8068% ( 1) 00:14:56.000 7.375 - 7.406: 99.8129% ( 1) 00:14:56.000 7.436 - 7.467: 99.8189% ( 1) 00:14:56.000 7.528 - 7.558: 99.8249% ( 1) 00:14:56.000 7.558 - 7.589: 99.8310% ( 1) 00:14:56.000 7.589 - 7.619: 99.8370% ( 1) 00:14:56.000 7.650 - 7.680: 99.8431% ( 1) 00:14:56.000 7.680 - 7.710: 99.8491% ( 1) 00:14:56.000 7.802 - 7.863: 99.8551% ( 1) 00:14:56.000 8.168 - 8.229: 99.8732% ( 3) 00:14:56.000 8.290 - 8.350: 99.8853% ( 2) 00:14:56.000 8.350 - 8.411: 99.8974% ( 2) 00:14:56.000 8.411 - 8.472: 99.9034% ( 1) 00:14:56.000 8.533 - 8.594: 99.9155% ( 2) 00:14:56.000 8.960 - 9.021: 99.9215% ( 1) 00:14:56.000 9.082 - 9.143: 99.9276% ( 1) 00:14:56.000 15.543 - 15.604: 99.9336% ( 1) 00:14:56.000 3994.575 - 4025.783: 100.0000% ( 11) 00:14:56.000 [2024-12-09 15:47:50.877228] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.000 00:14:56.000 Complete histogram 00:14:56.000 ================== 00:14:56.000 Range in us Cumulative Count 00:14:56.000 1.714 - 1.722: 0.0121% ( 2) 00:14:56.000 1.722 - 1.730: 0.0302% ( 3) 00:14:56.000 1.730 - 1.737: 0.0845% ( 9) 00:14:56.000 1.737 - 1.745: 0.0966% ( 2) 00:14:56.000 1.745 - 1.752: 0.1026% ( 1) 00:14:56.000 1.752 - 1.760: 0.1147% ( 2) 00:14:56.000 1.760 - 1.768: 1.2918% ( 195) 00:14:56.000 1.768 - 1.775: 11.2640% ( 1652) 00:14:56.000 1.775 - 1.783: 33.3273% ( 3655) 00:14:56.000 1.783 - 1.790: 49.2937% ( 2645) 00:14:56.000 1.790 - 1.798: 55.4268% ( 1016) 00:14:56.000 1.798 - 1.806: 58.3001% ( 476) 00:14:56.000 1.806 - 1.813: 60.9079% ( 432) 00:14:56.000 1.813 - 1.821: 67.9826% ( 1172) 00:14:56.000 1.821 - 1.829: 79.7115% ( 1943) 00:14:56.000 1.829 - 1.836: 88.9412% ( 1529)
00:14:56.000 1.836 - 1.844: 93.0943% ( 688) 00:14:56.000 1.844 - 1.851: 95.1225% ( 336) 00:14:56.000 1.851 - 1.859: 96.3902% ( 210) 00:14:56.000 1.859 - 1.867: 97.1448% ( 125) 00:14:56.000 1.867 - 1.874: 97.5251% ( 63) 00:14:56.000 1.874 - 1.882: 97.7182% ( 32) 00:14:56.000 1.882 - 1.890: 97.9838% ( 44) 00:14:56.000 1.890 - 1.897: 98.2494% ( 44) 00:14:56.000 1.897 - 1.905: 98.5090% ( 43) 00:14:56.000 1.905 - 1.912: 98.7203% ( 35) 00:14:56.000 1.912 - 1.920: 98.8651% ( 24) 00:14:56.000 1.920 - 1.928: 98.9134% ( 8) 00:14:56.000 1.928 - 1.935: 98.9678% ( 9) 00:14:56.000 1.935 - 1.943: 98.9859% ( 3) 00:14:56.000 1.943 - 1.950: 98.9919% ( 1) 00:14:56.000 1.950 - 1.966: 99.0221% ( 5) 00:14:56.000 1.996 - 2.011: 99.0764% ( 9) 00:14:56.000 2.011 - 2.027: 99.2213% ( 24) 00:14:56.000 2.042 - 2.057: 99.2394% ( 3) 00:14:56.000 2.057 - 2.072: 99.3179% ( 13) 00:14:56.000 2.072 - 2.088: 99.3360% ( 3) 00:14:56.000 2.088 - 2.103: 99.3420% ( 1) 00:14:56.000 2.103 - 2.118: 99.3541% ( 2) 00:14:56.000 2.118 - 2.133: 99.3601% ( 1) 00:14:56.000 2.133 - 2.149: 99.3722% ( 2) 00:14:56.000 2.164 - 2.179: 99.3782% ( 1) 00:14:56.000 3.886 - 3.901: 99.3843% ( 1) 00:14:56.000 4.297 - 4.328: 99.3903% ( 1) 00:14:56.000 4.571 - 4.602: 99.3964% ( 1) 00:14:56.000 4.632 - 4.663: 99.4024% ( 1) 00:14:56.000 4.815 - 4.846: 99.4084% ( 1) 00:14:56.000 4.846 - 4.876: 99.4205% ( 2) 00:14:56.000 4.998 - 5.029: 99.4265% ( 1) 00:14:56.000 5.120 - 5.150: 99.4326% ( 1) 00:14:56.000 5.242 - 5.272: 99.4507% ( 3) 00:14:56.000 5.425 - 5.455: 99.4567% ( 1) 00:14:56.000 5.516 - 5.547: 99.4628% ( 1) 00:14:56.000 5.577 - 5.608: 99.4688% ( 1) 00:14:56.000 6.339 - 6.370: 99.4748% ( 1) 00:14:56.000 6.583 - 6.613: 99.4809% ( 1) 00:14:56.000 7.010 - 7.040: 99.4869% ( 1) 00:14:56.000 7.040 - 7.070: 99.4929% ( 1) 00:14:56.000 7.192 - 7.223: 99.4990% ( 1) 00:14:56.000 7.223 - 7.253: 99.5050% ( 1) 00:14:56.000 8.411 - 8.472: 99.5110% ( 1) 00:14:56.000 8.472 - 8.533: 99.5171% ( 1) 00:14:56.000 8.777 - 8.838: 99.5231% ( 1) 
00:14:56.000 9.509 - 9.570: 99.5292% ( 1) 00:14:56.000 9.874 - 9.935: 99.5352% ( 1) 00:14:56.000 10.057 - 10.118: 99.5412% ( 1) 00:14:56.000 10.301 - 10.362: 99.5473% ( 1) 00:14:56.000 12.251 - 12.312: 99.5533% ( 1) 00:14:56.000 32.914 - 33.158: 99.5593% ( 1) 00:14:56.000 38.766 - 39.010: 99.5654% ( 1) 00:14:56.000 3994.575 - 4025.783: 99.9940% ( 71) 00:14:56.000 4119.406 - 4150.613: 100.0000% ( 1) 00:14:56.000 00:14:56.000 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:56.000 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:56.000 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:56.000 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:56.000 15:47:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.000 [ 00:14:56.000 { 00:14:56.000 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.000 "subtype": "Discovery", 00:14:56.000 "listen_addresses": [], 00:14:56.000 "allow_any_host": true, 00:14:56.000 "hosts": [] 00:14:56.000 }, 00:14:56.000 { 00:14:56.000 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.000 "subtype": "NVMe", 00:14:56.000 "listen_addresses": [ 00:14:56.000 { 00:14:56.000 "trtype": "VFIOUSER", 00:14:56.000 "adrfam": "IPv4", 00:14:56.000 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.000 "trsvcid": "0" 00:14:56.000 } 00:14:56.000 ], 00:14:56.000 "allow_any_host": true, 00:14:56.000 "hosts": [], 00:14:56.000 "serial_number": "SPDK1", 00:14:56.000 "model_number": "SPDK bdev Controller", 00:14:56.000 "max_namespaces": 32, 00:14:56.000 "min_cntlid": 1, 
00:14:56.000 "max_cntlid": 65519, 00:14:56.000 "namespaces": [ 00:14:56.000 { 00:14:56.000 "nsid": 1, 00:14:56.001 "bdev_name": "Malloc1", 00:14:56.001 "name": "Malloc1", 00:14:56.001 "nguid": "C185E26CB798401B8F5BD4FE04B17E55", 00:14:56.001 "uuid": "c185e26c-b798-401b-8f5b-d4fe04b17e55" 00:14:56.001 }, 00:14:56.001 { 00:14:56.001 "nsid": 2, 00:14:56.001 "bdev_name": "Malloc3", 00:14:56.001 "name": "Malloc3", 00:14:56.001 "nguid": "7C492AAD0E284A379B24CCEF97D61091", 00:14:56.001 "uuid": "7c492aad-0e28-4a37-9b24-ccef97d61091" 00:14:56.001 } 00:14:56.001 ] 00:14:56.001 }, 00:14:56.001 { 00:14:56.001 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.001 "subtype": "NVMe", 00:14:56.001 "listen_addresses": [ 00:14:56.001 { 00:14:56.001 "trtype": "VFIOUSER", 00:14:56.001 "adrfam": "IPv4", 00:14:56.001 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.001 "trsvcid": "0" 00:14:56.001 } 00:14:56.001 ], 00:14:56.001 "allow_any_host": true, 00:14:56.001 "hosts": [], 00:14:56.001 "serial_number": "SPDK2", 00:14:56.001 "model_number": "SPDK bdev Controller", 00:14:56.001 "max_namespaces": 32, 00:14:56.001 "min_cntlid": 1, 00:14:56.001 "max_cntlid": 65519, 00:14:56.001 "namespaces": [ 00:14:56.001 { 00:14:56.001 "nsid": 1, 00:14:56.001 "bdev_name": "Malloc2", 00:14:56.001 "name": "Malloc2", 00:14:56.001 "nguid": "CCBEC4024A5546E2AFF412795B846D06", 00:14:56.001 "uuid": "ccbec402-4a55-46e2-aff4-12795b846d06" 00:14:56.001 } 00:14:56.001 ] 00:14:56.001 } 00:14:56.001 ] 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1968346 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:56.001 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:56.307 [2024-12-09 15:47:51.291687] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.307 Malloc4 00:14:56.307 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:56.307 [2024-12-09 15:47:51.518403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.616 Asynchronous Event Request test 00:14:56.616 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.616 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.616 Registering asynchronous event callbacks... 
00:14:56.616 Starting namespace attribute notice tests for all controllers... 00:14:56.616 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:56.616 aer_cb - Changed Namespace 00:14:56.616 Cleaning up... 00:14:56.616 [ 00:14:56.616 { 00:14:56.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.616 "subtype": "Discovery", 00:14:56.616 "listen_addresses": [], 00:14:56.616 "allow_any_host": true, 00:14:56.616 "hosts": [] 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.616 "subtype": "NVMe", 00:14:56.616 "listen_addresses": [ 00:14:56.616 { 00:14:56.616 "trtype": "VFIOUSER", 00:14:56.616 "adrfam": "IPv4", 00:14:56.616 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.616 "trsvcid": "0" 00:14:56.616 } 00:14:56.616 ], 00:14:56.616 "allow_any_host": true, 00:14:56.616 "hosts": [], 00:14:56.616 "serial_number": "SPDK1", 00:14:56.616 "model_number": "SPDK bdev Controller", 00:14:56.616 "max_namespaces": 32, 00:14:56.616 "min_cntlid": 1, 00:14:56.616 "max_cntlid": 65519, 00:14:56.616 "namespaces": [ 00:14:56.616 { 00:14:56.616 "nsid": 1, 00:14:56.616 "bdev_name": "Malloc1", 00:14:56.616 "name": "Malloc1", 00:14:56.616 "nguid": "C185E26CB798401B8F5BD4FE04B17E55", 00:14:56.616 "uuid": "c185e26c-b798-401b-8f5b-d4fe04b17e55" 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 "nsid": 2, 00:14:56.616 "bdev_name": "Malloc3", 00:14:56.616 "name": "Malloc3", 00:14:56.616 "nguid": "7C492AAD0E284A379B24CCEF97D61091", 00:14:56.616 "uuid": "7c492aad-0e28-4a37-9b24-ccef97d61091" 00:14:56.616 } 00:14:56.616 ] 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.616 "subtype": "NVMe", 00:14:56.616 "listen_addresses": [ 00:14:56.616 { 00:14:56.616 "trtype": "VFIOUSER", 00:14:56.616 "adrfam": "IPv4", 00:14:56.616 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.616 "trsvcid": "0" 00:14:56.616 } 00:14:56.616 ], 00:14:56.616 
"allow_any_host": true, 00:14:56.616 "hosts": [], 00:14:56.616 "serial_number": "SPDK2", 00:14:56.616 "model_number": "SPDK bdev Controller", 00:14:56.616 "max_namespaces": 32, 00:14:56.616 "min_cntlid": 1, 00:14:56.616 "max_cntlid": 65519, 00:14:56.616 "namespaces": [ 00:14:56.616 { 00:14:56.616 "nsid": 1, 00:14:56.616 "bdev_name": "Malloc2", 00:14:56.616 "name": "Malloc2", 00:14:56.616 "nguid": "CCBEC4024A5546E2AFF412795B846D06", 00:14:56.616 "uuid": "ccbec402-4a55-46e2-aff4-12795b846d06" 00:14:56.616 }, 00:14:56.616 { 00:14:56.616 "nsid": 2, 00:14:56.616 "bdev_name": "Malloc4", 00:14:56.616 "name": "Malloc4", 00:14:56.616 "nguid": "CDA7E1225A8A4108B9B3CFCE963FBCF6", 00:14:56.616 "uuid": "cda7e122-5a8a-4108-b9b3-cfce963fbcf6" 00:14:56.616 } 00:14:56.616 ] 00:14:56.616 } 00:14:56.616 ] 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1968346 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1960615 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1960615 ']' 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1960615 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1960615 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1960615' 00:14:56.616 killing process with pid 1960615 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1960615 00:14:56.616 15:47:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1960615 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1968372 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1968372' 00:14:56.876 Process pid: 1968372 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1968372 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1968372 ']' 
00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.876 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.876 [2024-12-09 15:47:52.089041] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:56.876 [2024-12-09 15:47:52.089906] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:14:56.876 [2024-12-09 15:47:52.089943] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.135 [2024-12-09 15:47:52.163856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.135 [2024-12-09 15:47:52.201374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.135 [2024-12-09 15:47:52.201412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.135 [2024-12-09 15:47:52.201420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.135 [2024-12-09 15:47:52.201426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:57.135 [2024-12-09 15:47:52.201430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.135 [2024-12-09 15:47:52.202858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.135 [2024-12-09 15:47:52.202967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.135 [2024-12-09 15:47:52.203071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.135 [2024-12-09 15:47:52.203072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.135 [2024-12-09 15:47:52.272265] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:57.135 [2024-12-09 15:47:52.272762] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:57.135 [2024-12-09 15:47:52.273189] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:57.135 [2024-12-09 15:47:52.273348] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:57.135 [2024-12-09 15:47:52.273413] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
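The interrupt-mode bring-up recorded above (`nvmf_tgt … --interrupt-mode`, then `setup_nvmf_vfio_user` driving `rpc.py`) reduces to the command sequence sketched below. It is a dry run for orientation only: commands are collected into an array instead of executed, since the real `rpc.py` and `nvmf_tgt` paths (`…/spdk/scripts/rpc.py`, `…/spdk/build/bin/nvmf_tgt`) talk to a live target over `/var/tmp/spdk.sock`.

```shell
# Dry-run sketch of the vfio-user interrupt-mode setup traced in this log.
# Commands are recorded, not executed, so the sketch runs anywhere.
cmds=()
rpc() { cmds+=("rpc.py $*"); }   # echo-only stand-in for scripts/rpc.py

# Target launch (backgrounded by the real script, pid 1968372 in this run):
cmds+=("nvmf_tgt -i 0 -e 0xFFFF -m [0,1,2,3] --interrupt-mode")

# Transport with the interrupt-mode options (-M -I), then two devices:
rpc nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
    cmds+=("mkdir -p /var/run/vfio-user/domain/vfio-user${i}/${i}")
    rpc bdev_malloc_create 64 512 -b "Malloc${i}"
    rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode${i}" -a -s "SPDK${i}"
    rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode${i}" "Malloc${i}"
    rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode${i}" -t VFIOUSER \
        -a "/var/run/vfio-user/domain/vfio-user${i}/${i}" -s 0
done
printf '%s\n' "${cmds[@]}"
```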
00:14:57.135 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.135 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:57.135 15:47:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.514 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:58.514 Malloc1 00:14:58.773 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.773 15:47:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.032 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:59.292 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.292 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.292 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.292 Malloc2 00:14:59.551 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:59.551 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:59.810 15:47:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1968372 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1968372 ']' 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1968372 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.077 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968372 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968372' 00:15:00.077 killing process with pid 1968372 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1968372 00:15:00.077 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1968372 00:15:00.338 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:00.338 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:00.339 00:15:00.339 real 0m51.373s 00:15:00.339 user 3m18.840s 00:15:00.339 sys 0m3.221s 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:00.339 ************************************ 00:15:00.339 END TEST nvmf_vfio_user 00:15:00.339 ************************************ 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.339 ************************************ 00:15:00.339 START TEST nvmf_vfio_user_nvme_compliance 00:15:00.339 ************************************ 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:00.339 * Looking for test storage... 00:15:00.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.339 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.599 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.599 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.599 --rc genhtml_branch_coverage=1 00:15:00.599 --rc genhtml_function_coverage=1 00:15:00.599 --rc genhtml_legend=1 00:15:00.599 --rc geninfo_all_blocks=1 00:15:00.599 --rc geninfo_unexecuted_blocks=1 00:15:00.599 00:15:00.599 ' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.599 --rc genhtml_branch_coverage=1 00:15:00.599 --rc genhtml_function_coverage=1 00:15:00.599 --rc genhtml_legend=1 00:15:00.599 --rc geninfo_all_blocks=1 00:15:00.599 --rc geninfo_unexecuted_blocks=1 00:15:00.599 00:15:00.599 ' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.599 --rc genhtml_branch_coverage=1 00:15:00.599 --rc genhtml_function_coverage=1 00:15:00.599 --rc 
genhtml_legend=1 00:15:00.599 --rc geninfo_all_blocks=1 00:15:00.599 --rc geninfo_unexecuted_blocks=1 00:15:00.599 00:15:00.599 ' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.599 --rc genhtml_branch_coverage=1 00:15:00.599 --rc genhtml_function_coverage=1 00:15:00.599 --rc genhtml_legend=1 00:15:00.599 --rc geninfo_all_blocks=1 00:15:00.599 --rc geninfo_unexecuted_blocks=1 00:15:00.599 00:15:00.599 ' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.599 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.600 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.600 15:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1969129 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1969129' 00:15:00.600 Process pid: 1969129 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1969129 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1969129 ']' 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.600 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.600 [2024-12-09 15:47:55.713943] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:15:00.600 [2024-12-09 15:47:55.713991] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.600 [2024-12-09 15:47:55.787609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.859 [2024-12-09 15:47:55.828039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.859 [2024-12-09 15:47:55.828074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.859 [2024-12-09 15:47:55.828081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.859 [2024-12-09 15:47:55.828087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.859 [2024-12-09 15:47:55.828092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
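For orientation, the compliance stage traced in this part of the log (pid 1969129, `nvmf_tgt -m 0x7`) configures a single vfio-user subsystem via `rpc_cmd` and then points the `nvme_compliance` binary at the socket directory. A self-contained sketch, with an echo-only stand-in for `rpc_cmd` (the real helper forwards each command to the target over `/var/tmp/spdk.sock`):

```shell
# Echo-only stand-in so the sketch is runnable anywhere; not the real rpc_cmd.
rpc_cmd() { echo "rpc_cmd $*"; }

compliance_setup() {
    rpc_cmd nvmf_create_transport -t VFIOUSER
    echo "mkdir -p /var/run/vfio-user"          # done for real by compliance.sh
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # Finally the compliance binary is aimed at the vfio-user socket directory:
    echo "nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'"
}
compliance_setup
```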
00:15:00.859 [2024-12-09 15:47:55.829465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.859 [2024-12-09 15:47:55.829575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.859 [2024-12-09 15:47:55.829577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.859 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.859 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:00.859 15:47:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.796 15:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.796 malloc0 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:01.796 15:47:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:02.055 00:15:02.055 00:15:02.055 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.055 http://cunit.sourceforge.net/ 00:15:02.055 00:15:02.055 00:15:02.055 Suite: nvme_compliance 00:15:02.055 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 15:47:57.159685] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.055 [2024-12-09 15:47:57.161007] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:02.055 [2024-12-09 15:47:57.161022] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:02.055 [2024-12-09 15:47:57.161028] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:02.055 [2024-12-09 15:47:57.162707] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.055 passed 00:15:02.055 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 15:47:57.234233] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.055 [2024-12-09 15:47:57.240273] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.055 passed 00:15:02.314 Test: admin_identify_ns ...[2024-12-09 15:47:57.315813] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.314 [2024-12-09 15:47:57.375228] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:02.314 [2024-12-09 15:47:57.383228] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:02.314 [2024-12-09 15:47:57.404320] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:02.314 passed 00:15:02.314 Test: admin_get_features_mandatory_features ...[2024-12-09 15:47:57.482190] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.314 [2024-12-09 15:47:57.487223] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.314 passed 00:15:02.573 Test: admin_get_features_optional_features ...[2024-12-09 15:47:57.561715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.573 [2024-12-09 15:47:57.564738] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.573 passed 00:15:02.573 Test: admin_set_features_number_of_queues ...[2024-12-09 15:47:57.643416] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.573 [2024-12-09 15:47:57.757301] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.573 passed 00:15:02.833 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 15:47:57.830446] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.833 [2024-12-09 15:47:57.833467] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.833 passed 00:15:02.833 Test: admin_get_log_page_with_lpo ...[2024-12-09 15:47:57.911081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.833 [2024-12-09 15:47:57.979226] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:02.833 [2024-12-09 15:47:57.992303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.833 passed 00:15:03.092 Test: fabric_property_get ...[2024-12-09 15:47:58.067942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.092 [2024-12-09 15:47:58.069183] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:03.092 [2024-12-09 15:47:58.070963] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.092 passed 00:15:03.092 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 15:47:58.145464] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.092 [2024-12-09 15:47:58.146691] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:03.092 [2024-12-09 15:47:58.148481] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.092 passed 00:15:03.092 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 15:47:58.225100] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.092 [2024-12-09 15:47:58.308231] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.351 [2024-12-09 15:47:58.324222] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.351 [2024-12-09 15:47:58.329303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.351 passed 00:15:03.351 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 15:47:58.404715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.351 [2024-12-09 15:47:58.405946] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:03.351 [2024-12-09 15:47:58.407736] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.351 passed 00:15:03.351 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 15:47:58.486367] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.351 [2024-12-09 15:47:58.562230] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:03.610 [2024-12-09 
15:47:58.586223] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.610 [2024-12-09 15:47:58.591305] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.610 passed 00:15:03.610 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 15:47:58.666932] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.610 [2024-12-09 15:47:58.668160] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:03.610 [2024-12-09 15:47:58.668183] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:03.610 [2024-12-09 15:47:58.671959] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.610 passed 00:15:03.610 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 15:47:58.748584] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.869 [2024-12-09 15:47:58.841223] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:03.869 [2024-12-09 15:47:58.849250] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:03.869 [2024-12-09 15:47:58.857223] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:03.869 [2024-12-09 15:47:58.865232] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:03.869 [2024-12-09 15:47:58.894300] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.869 passed 00:15:03.869 Test: admin_create_io_sq_verify_pc ...[2024-12-09 15:47:58.967908] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.869 [2024-12-09 15:47:58.983234] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:03.869 [2024-12-09 15:47:59.004083] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.869 passed 00:15:03.869 Test: admin_create_io_qp_max_qps ...[2024-12-09 15:47:59.081601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.246 [2024-12-09 15:48:00.195228] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:05.505 [2024-12-09 15:48:00.572598] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.505 passed 00:15:05.505 Test: admin_create_io_sq_shared_cq ...[2024-12-09 15:48:00.650497] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.764 [2024-12-09 15:48:00.782234] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:05.764 [2024-12-09 15:48:00.819292] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:05.764 passed 00:15:05.764 00:15:05.764 Run Summary: Type Total Ran Passed Failed Inactive 00:15:05.764 suites 1 1 n/a 0 0 00:15:05.764 tests 18 18 18 0 0 00:15:05.764 asserts 360 360 360 0 n/a 00:15:05.764 00:15:05.764 Elapsed time = 1.507 seconds 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1969129 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1969129 ']' 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1969129 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969129 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969129' 00:15:05.764 killing process with pid 1969129 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1969129 00:15:05.764 15:48:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1969129 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:06.023 00:15:06.023 real 0m5.635s 00:15:06.023 user 0m15.771s 00:15:06.023 sys 0m0.493s 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 ************************************ 00:15:06.023 END TEST nvmf_vfio_user_nvme_compliance 00:15:06.023 ************************************ 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.023 ************************************ 00:15:06.023 START TEST nvmf_vfio_user_fuzz 00:15:06.023 ************************************ 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.023 * Looking for test storage... 00:15:06.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.023 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.283 15:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.283 --rc genhtml_branch_coverage=1 00:15:06.283 --rc genhtml_function_coverage=1 00:15:06.283 --rc genhtml_legend=1 00:15:06.283 --rc geninfo_all_blocks=1 00:15:06.283 --rc geninfo_unexecuted_blocks=1 00:15:06.283 00:15:06.283 ' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.283 --rc genhtml_branch_coverage=1 00:15:06.283 --rc genhtml_function_coverage=1 00:15:06.283 --rc genhtml_legend=1 00:15:06.283 --rc geninfo_all_blocks=1 00:15:06.283 --rc geninfo_unexecuted_blocks=1 00:15:06.283 00:15:06.283 ' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:06.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.283 --rc genhtml_branch_coverage=1 00:15:06.283 --rc genhtml_function_coverage=1 00:15:06.283 --rc genhtml_legend=1 00:15:06.283 --rc geninfo_all_blocks=1 00:15:06.283 --rc geninfo_unexecuted_blocks=1 00:15:06.283 00:15:06.283 ' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:06.283 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:06.283 --rc genhtml_branch_coverage=1 00:15:06.283 --rc genhtml_function_coverage=1 00:15:06.283 --rc genhtml_legend=1 00:15:06.283 --rc geninfo_all_blocks=1 00:15:06.283 --rc geninfo_unexecuted_blocks=1 00:15:06.283 00:15:06.283 ' 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.283 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.284 15:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1970195 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1970195' 00:15:06.284 Process pid: 1970195 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1970195 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1970195 ']' 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.284 15:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.284 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.542 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.543 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:06.543 15:48:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 malloc0 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:07.479 15:48:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:39.567 Fuzzing completed. Shutting down the fuzz application 00:15:39.567 00:15:39.567 Dumping successful admin opcodes: 00:15:39.567 9, 10, 00:15:39.567 Dumping successful io opcodes: 00:15:39.567 0, 00:15:39.567 NS: 0x20000081ef00 I/O qp, Total commands completed: 989208, total successful commands: 3875, random_seed: 1257732160 00:15:39.567 NS: 0x20000081ef00 admin qp, Total commands completed: 241598, total successful commands: 56, random_seed: 2224225664 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1970195 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1970195 ']' 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1970195 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970195 00:15:39.567 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970195' 00:15:39.567 killing process with pid 1970195 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1970195 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1970195 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:39.567 00:15:39.567 real 0m32.208s 00:15:39.567 user 0m29.516s 00:15:39.567 sys 0m30.993s 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.567 ************************************ 00:15:39.567 END TEST nvmf_vfio_user_fuzz 00:15:39.567 ************************************ 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.567 ************************************ 00:15:39.567 START TEST nvmf_auth_target 00:15:39.567 ************************************ 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:39.567 * Looking for test storage... 00:15:39.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.567 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.567 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.568 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:39.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.568 --rc genhtml_branch_coverage=1 00:15:39.568 --rc genhtml_function_coverage=1 00:15:39.568 --rc genhtml_legend=1 00:15:39.568 --rc geninfo_all_blocks=1 00:15:39.568 --rc geninfo_unexecuted_blocks=1 00:15:39.568 00:15:39.568 ' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:39.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.568 --rc genhtml_branch_coverage=1 00:15:39.568 --rc genhtml_function_coverage=1 00:15:39.568 --rc genhtml_legend=1 00:15:39.568 --rc geninfo_all_blocks=1 00:15:39.568 --rc geninfo_unexecuted_blocks=1 00:15:39.568 00:15:39.568 ' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:39.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.568 --rc genhtml_branch_coverage=1 00:15:39.568 --rc genhtml_function_coverage=1 00:15:39.568 --rc genhtml_legend=1 00:15:39.568 --rc geninfo_all_blocks=1 00:15:39.568 --rc geninfo_unexecuted_blocks=1 00:15:39.568 00:15:39.568 ' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:39.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.568 --rc genhtml_branch_coverage=1 00:15:39.568 --rc genhtml_function_coverage=1 00:15:39.568 --rc genhtml_legend=1 00:15:39.568 
--rc geninfo_all_blocks=1 00:15:39.568 --rc geninfo_unexecuted_blocks=1 00:15:39.568 00:15:39.568 ' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.568 
15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:39.568 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:39.568 15:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:39.568 15:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:44.844 15:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:44.844 15:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:44.844 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:44.844 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:44.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.845 
15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:44.845 Found net devices under 0000:af:00.0: cvl_0_0 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:44.845 
15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:44.845 Found net devices under 0000:af:00.1: cvl_0_1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:44.845 15:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:44.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:15:44.845 00:15:44.845 --- 10.0.0.2 ping statistics --- 00:15:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.845 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:15:44.845 00:15:44.845 --- 10.0.0.1 ping statistics --- 00:15:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.845 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1978925 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1978925 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1978925 ']' 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1979049
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:44.845 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=92033603c6c3307ee1f25e4193f260655f1a1444dca0ddb8
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YA2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 92033603c6c3307ee1f25e4193f260655f1a1444dca0ddb8 0
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 92033603c6c3307ee1f25e4193f260655f1a1444dca0ddb8 0
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=92033603c6c3307ee1f25e4193f260655f1a1444dca0ddb8
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YA2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YA2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.YA2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4fb805f2959af878ce3dd498cae4aff62d6f7b73bf210bcb4104e37677fc7f2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vn6
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4fb805f2959af878ce3dd498cae4aff62d6f7b73bf210bcb4104e37677fc7f2 3
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4fb805f2959af878ce3dd498cae4aff62d6f7b73bf210bcb4104e37677fc7f2 3
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4fb805f2959af878ce3dd498cae4aff62d6f7b73bf210bcb4104e37677fc7f2
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vn6
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vn6
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Vn6
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b50114755e335a6754f773dc79920751
00:15:44.846 15:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Wt0
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b50114755e335a6754f773dc79920751 1
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b50114755e335a6754f773dc79920751 1
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b50114755e335a6754f773dc79920751
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Wt0
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Wt0
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Wt0
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2094603810f7d3c3b5182387c6d89da3e68fde69eda50d9d
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NE3
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2094603810f7d3c3b5182387c6d89da3e68fde69eda50d9d 2
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2094603810f7d3c3b5182387c6d89da3e68fde69eda50d9d 2
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2094603810f7d3c3b5182387c6d89da3e68fde69eda50d9d
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:15:44.846 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NE3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NE3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.NE3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=285112903beea51fa48235fb607cc2b6d0c8850557df4668
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yxl
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 285112903beea51fa48235fb607cc2b6d0c8850557df4668 2
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 285112903beea51fa48235fb607cc2b6d0c8850557df4668 2
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=285112903beea51fa48235fb607cc2b6d0c8850557df4668
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yxl
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yxl
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.yxl
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a6164ca95bbff153d98599b0836221b
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.55M
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a6164ca95bbff153d98599b0836221b 1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a6164ca95bbff153d98599b0836221b 1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a6164ca95bbff153d98599b0836221b
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.55M
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.55M
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.55M
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f18ec7ea7e83ef0fad2e76c0adb5ff4b3da8638c4af9677a5545ec55f9ebb2da
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lTR
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f18ec7ea7e83ef0fad2e76c0adb5ff4b3da8638c4af9677a5545ec55f9ebb2da 3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f18ec7ea7e83ef0fad2e76c0adb5ff4b3da8638c4af9677a5545ec55f9ebb2da 3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f18ec7ea7e83ef0fad2e76c0adb5ff4b3da8638c4af9677a5545ec55f9ebb2da
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lTR
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lTR
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lTR
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1978925
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1978925 ']'
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:45.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:45.106 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.365 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:45.365 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:45.365 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1979049 /var/tmp/host.sock
00:15:45.365 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1979049 ']'
00:15:45.365 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:15:45.366 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:45.366 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:15:45.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:15:45.366 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:45.366 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YA2
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YA2
00:15:45.625 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YA2
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Vn6 ]]
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vn6
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vn6
00:15:45.884 15:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vn6
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Wt0
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Wt0
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Wt0
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.NE3 ]]
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NE3
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NE3
00:15:46.143 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NE3
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yxl
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yxl
00:15:46.402 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yxl
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.55M ]]
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.55M
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.55M
00:15:46.661 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.55M
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lTR
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lTR
00:15:46.920 15:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lTR
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:46.920 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.180 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:47.439 
00:15:47.439 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:47.439 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:47.439 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:47.698 {
00:15:47.698 "cntlid": 1,
00:15:47.698 "qid": 0,
00:15:47.698 "state": "enabled",
00:15:47.698 "thread": "nvmf_tgt_poll_group_000",
00:15:47.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:15:47.698 "listen_address": {
00:15:47.698 "trtype": "TCP",
00:15:47.698 "adrfam": "IPv4",
00:15:47.698 "traddr": "10.0.0.2",
00:15:47.698 "trsvcid": "4420"
00:15:47.698 },
00:15:47.698 "peer_address": {
00:15:47.698 "trtype": "TCP",
00:15:47.698 "adrfam": "IPv4",
00:15:47.698 "traddr": "10.0.0.1",
00:15:47.698 "trsvcid": "35570"
00:15:47.698 },
00:15:47.698 "auth": {
00:15:47.698 "state": "completed",
00:15:47.698 "digest": "sha256",
00:15:47.698 "dhgroup": "null"
00:15:47.698 }
00:15:47.698 }
00:15:47.698 ]'
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:47.698 15:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:47.957 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=:
00:15:47.957 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=:
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:48.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:48.525 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.784 15:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:49.044 00:15:49.044 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.044 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.044 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.303 { 00:15:49.303 "cntlid": 3, 00:15:49.303 "qid": 0, 00:15:49.303 "state": "enabled", 00:15:49.303 "thread": "nvmf_tgt_poll_group_000", 00:15:49.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:49.303 "listen_address": { 00:15:49.303 "trtype": "TCP", 00:15:49.303 "adrfam": "IPv4", 00:15:49.303 
"traddr": "10.0.0.2", 00:15:49.303 "trsvcid": "4420" 00:15:49.303 }, 00:15:49.303 "peer_address": { 00:15:49.303 "trtype": "TCP", 00:15:49.303 "adrfam": "IPv4", 00:15:49.303 "traddr": "10.0.0.1", 00:15:49.303 "trsvcid": "35602" 00:15:49.303 }, 00:15:49.303 "auth": { 00:15:49.303 "state": "completed", 00:15:49.303 "digest": "sha256", 00:15:49.303 "dhgroup": "null" 00:15:49.303 } 00:15:49.303 } 00:15:49.303 ]' 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.303 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.562 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:15:49.562 15:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.127 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.386 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.644 00:15:50.644 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.644 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.644 
15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.903 { 00:15:50.903 "cntlid": 5, 00:15:50.903 "qid": 0, 00:15:50.903 "state": "enabled", 00:15:50.903 "thread": "nvmf_tgt_poll_group_000", 00:15:50.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:50.903 "listen_address": { 00:15:50.903 "trtype": "TCP", 00:15:50.903 "adrfam": "IPv4", 00:15:50.903 "traddr": "10.0.0.2", 00:15:50.903 "trsvcid": "4420" 00:15:50.903 }, 00:15:50.903 "peer_address": { 00:15:50.903 "trtype": "TCP", 00:15:50.903 "adrfam": "IPv4", 00:15:50.903 "traddr": "10.0.0.1", 00:15:50.903 "trsvcid": "38074" 00:15:50.903 }, 00:15:50.903 "auth": { 00:15:50.903 "state": "completed", 00:15:50.903 "digest": "sha256", 00:15:50.903 "dhgroup": "null" 00:15:50.903 } 00:15:50.903 } 00:15:50.903 ]' 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.903 15:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.903 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.903 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.903 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.162 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:15:51.162 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.729 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.730 15:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.988 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:52.247 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.247 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.506 
15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.506 { 00:15:52.506 "cntlid": 7, 00:15:52.506 "qid": 0, 00:15:52.506 "state": "enabled", 00:15:52.506 "thread": "nvmf_tgt_poll_group_000", 00:15:52.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:52.506 "listen_address": { 00:15:52.506 "trtype": "TCP", 00:15:52.506 "adrfam": "IPv4", 00:15:52.506 "traddr": "10.0.0.2", 00:15:52.506 "trsvcid": "4420" 00:15:52.506 }, 00:15:52.506 "peer_address": { 00:15:52.506 "trtype": "TCP", 00:15:52.506 "adrfam": "IPv4", 00:15:52.506 "traddr": "10.0.0.1", 00:15:52.506 "trsvcid": "38106" 00:15:52.506 }, 00:15:52.506 "auth": { 00:15:52.506 "state": "completed", 00:15:52.506 "digest": "sha256", 00:15:52.506 "dhgroup": "null" 00:15:52.506 } 00:15:52.506 } 00:15:52.506 ]' 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.506 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.765 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:15:52.765 15:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.333 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.593 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.852 00:15:53.852 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.852 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.852 15:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.852 { 00:15:53.852 "cntlid": 9, 00:15:53.852 "qid": 0, 00:15:53.852 "state": "enabled", 00:15:53.852 "thread": "nvmf_tgt_poll_group_000", 00:15:53.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:53.852 "listen_address": { 00:15:53.852 "trtype": "TCP", 00:15:53.852 "adrfam": "IPv4", 00:15:53.852 "traddr": "10.0.0.2", 00:15:53.852 "trsvcid": "4420" 00:15:53.852 }, 00:15:53.852 "peer_address": { 00:15:53.852 "trtype": "TCP", 00:15:53.852 "adrfam": "IPv4", 00:15:53.852 "traddr": "10.0.0.1", 00:15:53.852 "trsvcid": "38136" 00:15:53.852 
}, 00:15:53.852 "auth": { 00:15:53.852 "state": "completed", 00:15:53.852 "digest": "sha256", 00:15:53.852 "dhgroup": "ffdhe2048" 00:15:53.852 } 00:15:53.852 } 00:15:53.852 ]' 00:15:53.852 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.111 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.370 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:15:54.370 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret 
DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.938 15:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.197 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.456 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.456 { 00:15:55.456 "cntlid": 11, 00:15:55.456 "qid": 0, 00:15:55.456 "state": "enabled", 00:15:55.456 "thread": "nvmf_tgt_poll_group_000", 00:15:55.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:55.456 "listen_address": { 00:15:55.456 "trtype": "TCP", 00:15:55.456 "adrfam": "IPv4", 00:15:55.456 "traddr": "10.0.0.2", 00:15:55.456 "trsvcid": "4420" 00:15:55.456 }, 00:15:55.456 "peer_address": { 00:15:55.456 "trtype": "TCP", 00:15:55.456 "adrfam": "IPv4", 00:15:55.456 "traddr": "10.0.0.1", 00:15:55.456 "trsvcid": "38166" 00:15:55.456 }, 00:15:55.456 "auth": { 00:15:55.456 "state": "completed", 00:15:55.456 "digest": "sha256", 00:15:55.456 "dhgroup": "ffdhe2048" 00:15:55.456 } 00:15:55.456 } 00:15:55.456 ]' 00:15:55.456 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.715 15:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.715 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.974 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:15:55.974 15:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.542 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.543 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:56.802 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.802 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.802 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.802 15:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.802 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 15:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.061 { 00:15:57.061 "cntlid": 13, 00:15:57.061 "qid": 0, 00:15:57.061 "state": "enabled", 00:15:57.061 "thread": "nvmf_tgt_poll_group_000", 00:15:57.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:57.061 "listen_address": { 00:15:57.061 "trtype": "TCP", 00:15:57.061 "adrfam": "IPv4", 00:15:57.061 "traddr": "10.0.0.2", 00:15:57.061 "trsvcid": "4420" 00:15:57.061 }, 00:15:57.061 "peer_address": { 00:15:57.061 "trtype": "TCP", 00:15:57.061 "adrfam": "IPv4", 00:15:57.061 "traddr": "10.0.0.1", 00:15:57.061 "trsvcid": "38188" 00:15:57.061 }, 00:15:57.061 "auth": { 00:15:57.061 "state": "completed", 00:15:57.061 "digest": "sha256", 00:15:57.061 "dhgroup": "ffdhe2048" 00:15:57.061 } 00:15:57.061 } 00:15:57.061 ]' 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.061 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.321 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.321 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.321 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.321 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.321 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.710 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:15:57.710 15:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.970 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.228 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:58.228 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.229 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.488 00:15:58.488 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.488 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.488 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.747 { 00:15:58.747 "cntlid": 15, 00:15:58.747 "qid": 0, 00:15:58.747 "state": "enabled", 00:15:58.747 "thread": "nvmf_tgt_poll_group_000", 00:15:58.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:15:58.747 "listen_address": { 00:15:58.747 "trtype": "TCP", 00:15:58.747 "adrfam": "IPv4", 00:15:58.747 "traddr": "10.0.0.2", 00:15:58.747 "trsvcid": "4420" 00:15:58.747 }, 00:15:58.747 "peer_address": { 00:15:58.747 "trtype": "TCP", 00:15:58.747 "adrfam": "IPv4", 00:15:58.747 "traddr": "10.0.0.1", 
00:15:58.747 "trsvcid": "38210" 00:15:58.747 }, 00:15:58.747 "auth": { 00:15:58.747 "state": "completed", 00:15:58.747 "digest": "sha256", 00:15:58.747 "dhgroup": "ffdhe2048" 00:15:58.747 } 00:15:58.747 } 00:15:58.747 ]' 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.747 15:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.006 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:15:59.006 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.573 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.832 15:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.832 15:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.096 00:16:00.096 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.096 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.096 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.357 { 00:16:00.357 "cntlid": 17, 00:16:00.357 "qid": 0, 00:16:00.357 "state": "enabled", 00:16:00.357 "thread": "nvmf_tgt_poll_group_000", 00:16:00.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:00.357 "listen_address": { 00:16:00.357 "trtype": "TCP", 00:16:00.357 "adrfam": "IPv4", 00:16:00.357 "traddr": "10.0.0.2", 00:16:00.357 "trsvcid": "4420" 00:16:00.357 }, 00:16:00.357 "peer_address": { 00:16:00.357 "trtype": "TCP", 00:16:00.357 "adrfam": "IPv4", 00:16:00.357 "traddr": "10.0.0.1", 00:16:00.357 "trsvcid": "36950" 00:16:00.357 }, 00:16:00.357 "auth": { 00:16:00.357 "state": "completed", 00:16:00.357 "digest": "sha256", 00:16:00.357 "dhgroup": "ffdhe3072" 00:16:00.357 } 00:16:00.357 } 00:16:00.357 ]' 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.357 15:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.357 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.616 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:00.616 15:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:01.185 15:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.185 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.444 15:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.444 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.702 00:16:01.702 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.702 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.702 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.961 { 00:16:01.961 "cntlid": 19, 00:16:01.961 "qid": 0, 00:16:01.961 "state": "enabled", 00:16:01.961 "thread": "nvmf_tgt_poll_group_000", 00:16:01.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:01.961 "listen_address": { 00:16:01.961 "trtype": "TCP", 00:16:01.961 "adrfam": "IPv4", 00:16:01.961 "traddr": "10.0.0.2", 00:16:01.961 "trsvcid": "4420" 00:16:01.961 }, 00:16:01.961 "peer_address": { 00:16:01.961 "trtype": "TCP", 00:16:01.961 "adrfam": "IPv4", 00:16:01.961 "traddr": "10.0.0.1", 00:16:01.961 "trsvcid": "36984" 00:16:01.961 }, 00:16:01.961 "auth": { 00:16:01.961 "state": "completed", 00:16:01.961 "digest": "sha256", 00:16:01.961 "dhgroup": "ffdhe3072" 00:16:01.961 } 00:16:01.961 } 00:16:01.961 ]' 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.961 15:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.961 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.961 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.961 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.961 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.961 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.220 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:02.220 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.787 15:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.787 15:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.046 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.304 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.304 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.305 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.305 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.305 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.305 { 00:16:03.305 "cntlid": 21, 00:16:03.305 "qid": 0, 00:16:03.305 "state": "enabled", 00:16:03.305 "thread": "nvmf_tgt_poll_group_000", 00:16:03.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:03.305 "listen_address": { 00:16:03.305 "trtype": "TCP", 00:16:03.305 "adrfam": "IPv4", 00:16:03.305 "traddr": "10.0.0.2", 00:16:03.305 
"trsvcid": "4420" 00:16:03.305 }, 00:16:03.305 "peer_address": { 00:16:03.305 "trtype": "TCP", 00:16:03.305 "adrfam": "IPv4", 00:16:03.305 "traddr": "10.0.0.1", 00:16:03.305 "trsvcid": "37012" 00:16:03.305 }, 00:16:03.305 "auth": { 00:16:03.305 "state": "completed", 00:16:03.305 "digest": "sha256", 00:16:03.305 "dhgroup": "ffdhe3072" 00:16:03.305 } 00:16:03.305 } 00:16:03.305 ]' 00:16:03.305 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.578 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.579 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.838 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:03.838 15:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.405 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.664 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.926 00:16:04.926 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.926 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.926 15:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.926 { 00:16:04.926 "cntlid": 23, 00:16:04.926 "qid": 0, 00:16:04.926 "state": "enabled", 00:16:04.926 "thread": "nvmf_tgt_poll_group_000", 00:16:04.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:04.926 "listen_address": { 00:16:04.926 "trtype": "TCP", 00:16:04.926 "adrfam": "IPv4", 00:16:04.926 "traddr": "10.0.0.2", 00:16:04.926 "trsvcid": "4420" 00:16:04.926 }, 00:16:04.926 "peer_address": { 00:16:04.926 "trtype": "TCP", 00:16:04.926 "adrfam": "IPv4", 00:16:04.926 "traddr": "10.0.0.1", 00:16:04.926 "trsvcid": "37034" 00:16:04.926 }, 00:16:04.926 "auth": { 00:16:04.926 "state": "completed", 00:16:04.926 "digest": "sha256", 00:16:04.926 "dhgroup": "ffdhe3072" 00:16:04.926 } 00:16:04.926 } 00:16:04.926 ]' 00:16:04.926 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.217 15:49:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.217 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.560 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:05.560 15:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.838 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.097 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.098 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.357 00:16:06.357 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.357 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.357 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.616 15:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.616 { 00:16:06.616 "cntlid": 25, 00:16:06.616 "qid": 0, 00:16:06.616 "state": "enabled", 00:16:06.616 "thread": "nvmf_tgt_poll_group_000", 00:16:06.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:06.616 "listen_address": { 00:16:06.616 "trtype": "TCP", 00:16:06.616 "adrfam": "IPv4", 00:16:06.616 "traddr": "10.0.0.2", 00:16:06.616 "trsvcid": "4420" 00:16:06.616 }, 00:16:06.616 "peer_address": { 00:16:06.616 "trtype": "TCP", 00:16:06.616 "adrfam": "IPv4", 00:16:06.616 "traddr": "10.0.0.1", 00:16:06.616 "trsvcid": "37062" 00:16:06.616 }, 00:16:06.616 "auth": { 00:16:06.616 "state": "completed", 00:16:06.616 "digest": "sha256", 00:16:06.616 "dhgroup": "ffdhe4096" 00:16:06.616 } 00:16:06.616 } 00:16:06.616 ]' 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.616 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.876 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.876 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.876 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.876 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.876 15:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.876 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:06.876 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.446 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.446 15:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.705 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:07.705 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.705 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.706 15:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.965 00:16:07.965 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.965 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.965 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.223 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.224 { 00:16:08.224 "cntlid": 27, 00:16:08.224 "qid": 0, 00:16:08.224 "state": "enabled", 00:16:08.224 "thread": "nvmf_tgt_poll_group_000", 00:16:08.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:08.224 "listen_address": { 00:16:08.224 "trtype": "TCP", 00:16:08.224 "adrfam": "IPv4", 00:16:08.224 "traddr": "10.0.0.2", 00:16:08.224 
"trsvcid": "4420" 00:16:08.224 }, 00:16:08.224 "peer_address": { 00:16:08.224 "trtype": "TCP", 00:16:08.224 "adrfam": "IPv4", 00:16:08.224 "traddr": "10.0.0.1", 00:16:08.224 "trsvcid": "37088" 00:16:08.224 }, 00:16:08.224 "auth": { 00:16:08.224 "state": "completed", 00:16:08.224 "digest": "sha256", 00:16:08.224 "dhgroup": "ffdhe4096" 00:16:08.224 } 00:16:08.224 } 00:16:08.224 ]' 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.224 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.482 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.482 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.482 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.483 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:08.483 15:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.050 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.310 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.569 00:16:09.569 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.569 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.569 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.829 { 00:16:09.829 "cntlid": 29, 00:16:09.829 "qid": 0, 00:16:09.829 "state": "enabled", 00:16:09.829 "thread": "nvmf_tgt_poll_group_000", 00:16:09.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:09.829 "listen_address": { 00:16:09.829 "trtype": "TCP", 00:16:09.829 "adrfam": "IPv4", 00:16:09.829 "traddr": "10.0.0.2", 00:16:09.829 "trsvcid": "4420" 00:16:09.829 }, 00:16:09.829 "peer_address": { 00:16:09.829 "trtype": "TCP", 00:16:09.829 "adrfam": "IPv4", 00:16:09.829 "traddr": "10.0.0.1", 00:16:09.829 "trsvcid": "53878" 00:16:09.829 }, 00:16:09.829 "auth": { 00:16:09.829 "state": "completed", 00:16:09.829 "digest": "sha256", 00:16:09.829 "dhgroup": "ffdhe4096" 00:16:09.829 } 00:16:09.829 } 00:16:09.829 ]' 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.829 15:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.829 15:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.829 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.829 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.829 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.829 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.088 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.088 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:10.088 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.657 15:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.915 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.173 00:16:11.173 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.173 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.173 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.432 { 00:16:11.432 "cntlid": 31, 00:16:11.432 "qid": 0, 00:16:11.432 "state": "enabled", 00:16:11.432 "thread": "nvmf_tgt_poll_group_000", 00:16:11.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:11.432 "listen_address": { 00:16:11.432 "trtype": "TCP", 00:16:11.432 "adrfam": "IPv4", 00:16:11.432 "traddr": "10.0.0.2", 00:16:11.432 "trsvcid": "4420" 00:16:11.432 }, 00:16:11.432 "peer_address": { 00:16:11.432 "trtype": "TCP", 00:16:11.432 "adrfam": "IPv4", 00:16:11.432 "traddr": "10.0.0.1", 00:16:11.432 "trsvcid": "53898" 00:16:11.432 }, 00:16:11.432 "auth": { 00:16:11.432 "state": "completed", 00:16:11.432 "digest": "sha256", 00:16:11.432 "dhgroup": "ffdhe4096" 00:16:11.432 } 00:16:11.432 } 00:16:11.432 ]' 00:16:11.432 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.433 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.692 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:11.692 15:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.260 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.260 15:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.520 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.779 00:16:12.779 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.779 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.779 15:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.039 { 00:16:13.039 "cntlid": 33, 00:16:13.039 "qid": 0, 00:16:13.039 "state": "enabled", 00:16:13.039 "thread": "nvmf_tgt_poll_group_000", 00:16:13.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:13.039 "listen_address": { 00:16:13.039 "trtype": "TCP", 00:16:13.039 "adrfam": "IPv4", 00:16:13.039 "traddr": "10.0.0.2", 00:16:13.039 
"trsvcid": "4420" 00:16:13.039 }, 00:16:13.039 "peer_address": { 00:16:13.039 "trtype": "TCP", 00:16:13.039 "adrfam": "IPv4", 00:16:13.039 "traddr": "10.0.0.1", 00:16:13.039 "trsvcid": "53914" 00:16:13.039 }, 00:16:13.039 "auth": { 00:16:13.039 "state": "completed", 00:16:13.039 "digest": "sha256", 00:16:13.039 "dhgroup": "ffdhe6144" 00:16:13.039 } 00:16:13.039 } 00:16:13.039 ]' 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.039 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.298 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.298 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.298 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.298 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:13.298 15:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.867 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.126 15:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.126 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.386 00:16:14.645 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.645 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.645 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.645 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.646 { 00:16:14.646 "cntlid": 35, 00:16:14.646 "qid": 0, 00:16:14.646 "state": "enabled", 00:16:14.646 "thread": "nvmf_tgt_poll_group_000", 00:16:14.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:14.646 "listen_address": { 00:16:14.646 "trtype": "TCP", 00:16:14.646 "adrfam": "IPv4", 00:16:14.646 "traddr": "10.0.0.2", 00:16:14.646 "trsvcid": "4420" 00:16:14.646 }, 00:16:14.646 "peer_address": { 00:16:14.646 "trtype": "TCP", 00:16:14.646 "adrfam": "IPv4", 00:16:14.646 "traddr": "10.0.0.1", 00:16:14.646 "trsvcid": "53946" 00:16:14.646 }, 00:16:14.646 "auth": { 00:16:14.646 "state": "completed", 00:16:14.646 "digest": "sha256", 00:16:14.646 "dhgroup": "ffdhe6144" 00:16:14.646 } 00:16:14.646 } 00:16:14.646 ]' 00:16:14.646 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.905 15:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.905 15:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.165 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:15.165 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:15.733 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.734 15:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.302 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.302 15:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.302 { 00:16:16.302 "cntlid": 37, 00:16:16.302 "qid": 0, 00:16:16.302 "state": "enabled", 00:16:16.302 "thread": "nvmf_tgt_poll_group_000", 00:16:16.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:16.302 "listen_address": { 00:16:16.302 "trtype": "TCP", 00:16:16.302 "adrfam": "IPv4", 00:16:16.302 "traddr": "10.0.0.2", 00:16:16.302 "trsvcid": "4420" 00:16:16.302 }, 00:16:16.302 "peer_address": { 00:16:16.302 "trtype": "TCP", 00:16:16.302 "adrfam": "IPv4", 00:16:16.302 "traddr": "10.0.0.1", 00:16:16.302 "trsvcid": "53980" 00:16:16.302 }, 00:16:16.302 "auth": { 00:16:16.302 "state": "completed", 00:16:16.302 "digest": "sha256", 00:16:16.302 "dhgroup": "ffdhe6144" 00:16:16.302 } 00:16:16.302 } 00:16:16.302 ]' 00:16:16.302 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.561 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.821 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:16.821 15:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.391 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.651 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.651 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.651 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.651 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.910 00:16:17.910 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.910 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.910 15:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.169 { 00:16:18.169 "cntlid": 39, 00:16:18.169 "qid": 0, 00:16:18.169 "state": "enabled", 00:16:18.169 "thread": "nvmf_tgt_poll_group_000", 00:16:18.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:18.169 "listen_address": { 00:16:18.169 "trtype": "TCP", 00:16:18.169 "adrfam": 
"IPv4", 00:16:18.169 "traddr": "10.0.0.2", 00:16:18.169 "trsvcid": "4420" 00:16:18.169 }, 00:16:18.169 "peer_address": { 00:16:18.169 "trtype": "TCP", 00:16:18.169 "adrfam": "IPv4", 00:16:18.169 "traddr": "10.0.0.1", 00:16:18.169 "trsvcid": "54002" 00:16:18.169 }, 00:16:18.169 "auth": { 00:16:18.169 "state": "completed", 00:16:18.169 "digest": "sha256", 00:16:18.169 "dhgroup": "ffdhe6144" 00:16:18.169 } 00:16:18.169 } 00:16:18.169 ]' 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.169 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.170 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.170 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.170 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.170 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.429 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:18.429 15:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.997 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.998 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.998 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.257 
15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.257 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.826 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.826 15:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.826 15:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.826 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.826 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.826 { 00:16:19.826 "cntlid": 41, 00:16:19.826 "qid": 0, 00:16:19.826 "state": "enabled", 00:16:19.826 "thread": "nvmf_tgt_poll_group_000", 00:16:19.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:19.826 "listen_address": { 00:16:19.826 "trtype": "TCP", 00:16:19.826 "adrfam": "IPv4", 00:16:19.826 "traddr": "10.0.0.2", 00:16:19.826 "trsvcid": "4420" 00:16:19.826 }, 00:16:19.826 "peer_address": { 00:16:19.826 "trtype": "TCP", 00:16:19.826 "adrfam": "IPv4", 00:16:19.826 "traddr": "10.0.0.1", 00:16:19.826 "trsvcid": "40934" 00:16:19.826 }, 00:16:19.826 "auth": { 00:16:19.826 "state": "completed", 00:16:19.826 "digest": "sha256", 00:16:19.826 "dhgroup": "ffdhe8192" 00:16:19.826 } 00:16:19.826 } 00:16:19.826 ]' 00:16:19.826 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.826 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:19.826 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.091 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.091 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.091 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.091 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.091 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.352 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:20.352 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.921 15:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.921 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.490 00:16:21.490 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.490 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.490 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.749 15:49:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.749 { 00:16:21.749 "cntlid": 43, 00:16:21.749 "qid": 0, 00:16:21.749 "state": "enabled", 00:16:21.749 "thread": "nvmf_tgt_poll_group_000", 00:16:21.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:21.749 "listen_address": { 00:16:21.749 "trtype": "TCP", 00:16:21.749 "adrfam": "IPv4", 00:16:21.749 "traddr": "10.0.0.2", 00:16:21.749 "trsvcid": "4420" 00:16:21.749 }, 00:16:21.749 "peer_address": { 00:16:21.749 "trtype": "TCP", 00:16:21.749 "adrfam": "IPv4", 00:16:21.749 "traddr": "10.0.0.1", 00:16:21.749 "trsvcid": "40960" 00:16:21.749 }, 00:16:21.749 "auth": { 00:16:21.749 "state": "completed", 00:16:21.749 "digest": "sha256", 00:16:21.749 "dhgroup": "ffdhe8192" 00:16:21.749 } 00:16:21.749 } 00:16:21.749 ]' 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.749 15:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.009 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:22.009 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.578 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.837 15:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.405 00:16:23.405 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.405 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.405 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.664 { 00:16:23.664 "cntlid": 45, 00:16:23.664 "qid": 0, 00:16:23.664 "state": "enabled", 00:16:23.664 "thread": "nvmf_tgt_poll_group_000", 00:16:23.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:23.664 
"listen_address": { 00:16:23.664 "trtype": "TCP", 00:16:23.664 "adrfam": "IPv4", 00:16:23.664 "traddr": "10.0.0.2", 00:16:23.664 "trsvcid": "4420" 00:16:23.664 }, 00:16:23.664 "peer_address": { 00:16:23.664 "trtype": "TCP", 00:16:23.664 "adrfam": "IPv4", 00:16:23.664 "traddr": "10.0.0.1", 00:16:23.664 "trsvcid": "40974" 00:16:23.664 }, 00:16:23.664 "auth": { 00:16:23.664 "state": "completed", 00:16:23.664 "digest": "sha256", 00:16:23.664 "dhgroup": "ffdhe8192" 00:16:23.664 } 00:16:23.664 } 00:16:23.664 ]' 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.664 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.924 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:23.924 15:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.493 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.752 15:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.011 00:16:25.011 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.011 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:25.011 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.271 { 00:16:25.271 "cntlid": 47, 00:16:25.271 "qid": 0, 00:16:25.271 "state": "enabled", 00:16:25.271 "thread": "nvmf_tgt_poll_group_000", 00:16:25.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:25.271 "listen_address": { 00:16:25.271 "trtype": "TCP", 00:16:25.271 "adrfam": "IPv4", 00:16:25.271 "traddr": "10.0.0.2", 00:16:25.271 "trsvcid": "4420" 00:16:25.271 }, 00:16:25.271 "peer_address": { 00:16:25.271 "trtype": "TCP", 00:16:25.271 "adrfam": "IPv4", 00:16:25.271 "traddr": "10.0.0.1", 00:16:25.271 "trsvcid": "41016" 00:16:25.271 }, 00:16:25.271 "auth": { 00:16:25.271 "state": "completed", 00:16:25.271 "digest": "sha256", 00:16:25.271 "dhgroup": "ffdhe8192" 00:16:25.271 } 00:16:25.271 } 00:16:25.271 ]' 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.271 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.271 15:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.530 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.530 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.530 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.530 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.530 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.790 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:25.790 15:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.359 
15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.359 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.619 00:16:26.619 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.619 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.619 15:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.878 { 00:16:26.878 "cntlid": 49, 00:16:26.878 "qid": 0, 00:16:26.878 "state": "enabled", 00:16:26.878 "thread": "nvmf_tgt_poll_group_000", 00:16:26.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:26.878 "listen_address": { 00:16:26.878 "trtype": "TCP", 00:16:26.878 "adrfam": "IPv4", 00:16:26.878 "traddr": "10.0.0.2", 00:16:26.878 "trsvcid": "4420" 00:16:26.878 }, 00:16:26.878 "peer_address": { 00:16:26.878 "trtype": "TCP", 00:16:26.878 "adrfam": "IPv4", 00:16:26.878 "traddr": "10.0.0.1", 00:16:26.878 "trsvcid": "41040" 00:16:26.878 }, 00:16:26.878 "auth": { 00:16:26.878 "state": "completed", 00:16:26.878 "digest": "sha384", 00:16:26.878 "dhgroup": "null" 00:16:26.878 } 00:16:26.878 } 00:16:26.878 ]' 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.878 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.140 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.140 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:27.140 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.140 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:27.140 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.711 15:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.711 15:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.971 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.230 00:16:28.230 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.230 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.230 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.490 { 00:16:28.490 "cntlid": 51, 00:16:28.490 "qid": 0, 00:16:28.490 "state": "enabled", 00:16:28.490 "thread": "nvmf_tgt_poll_group_000", 00:16:28.490 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:28.490 "listen_address": { 00:16:28.490 "trtype": "TCP", 00:16:28.490 "adrfam": "IPv4", 00:16:28.490 "traddr": "10.0.0.2", 00:16:28.490 "trsvcid": "4420" 00:16:28.490 }, 00:16:28.490 "peer_address": { 00:16:28.490 "trtype": "TCP", 00:16:28.490 "adrfam": "IPv4", 00:16:28.490 "traddr": "10.0.0.1", 00:16:28.490 "trsvcid": "41074" 00:16:28.490 }, 00:16:28.490 "auth": { 00:16:28.490 "state": "completed", 00:16:28.490 "digest": "sha384", 00:16:28.490 "dhgroup": "null" 00:16:28.490 } 00:16:28.490 } 00:16:28.490 ]' 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.490 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.749 15:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:28.749 15:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.318 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.579 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.838 00:16:29.838 15:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.838 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.838 15:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.095 { 00:16:30.095 "cntlid": 53, 00:16:30.095 "qid": 0, 00:16:30.095 "state": "enabled", 00:16:30.095 "thread": "nvmf_tgt_poll_group_000", 00:16:30.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:30.095 "listen_address": { 00:16:30.095 "trtype": "TCP", 00:16:30.095 "adrfam": "IPv4", 00:16:30.095 "traddr": "10.0.0.2", 00:16:30.095 "trsvcid": "4420" 00:16:30.095 }, 00:16:30.095 "peer_address": { 00:16:30.095 "trtype": "TCP", 00:16:30.095 "adrfam": "IPv4", 00:16:30.095 "traddr": "10.0.0.1", 00:16:30.095 "trsvcid": "35950" 00:16:30.095 }, 00:16:30.095 "auth": { 00:16:30.095 "state": "completed", 00:16:30.095 "digest": "sha384", 00:16:30.095 "dhgroup": "null" 00:16:30.095 } 00:16:30.095 } 00:16:30.095 ]' 00:16:30.095 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.096 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.353 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:30.353 15:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.922 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.181 
15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.181 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.440 00:16:31.440 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.440 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.440 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.698 15:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.698 { 00:16:31.698 "cntlid": 55, 00:16:31.698 "qid": 0, 00:16:31.698 "state": "enabled", 00:16:31.698 "thread": "nvmf_tgt_poll_group_000", 00:16:31.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:31.698 "listen_address": { 00:16:31.698 "trtype": "TCP", 00:16:31.698 "adrfam": "IPv4", 00:16:31.698 "traddr": "10.0.0.2", 00:16:31.698 "trsvcid": "4420" 00:16:31.698 }, 00:16:31.698 "peer_address": { 00:16:31.698 "trtype": "TCP", 00:16:31.698 "adrfam": "IPv4", 00:16:31.698 "traddr": "10.0.0.1", 00:16:31.698 "trsvcid": "35980" 00:16:31.698 }, 00:16:31.698 "auth": { 00:16:31.698 "state": "completed", 00:16:31.698 "digest": "sha384", 00:16:31.698 "dhgroup": "null" 00:16:31.698 } 00:16:31.698 } 00:16:31.698 ]' 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.698 15:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.957 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:31.957 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.526 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.526 15:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.785 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.786 15:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.045 00:16:33.045 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.045 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.045 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.304 { 00:16:33.304 "cntlid": 57, 00:16:33.304 "qid": 0, 00:16:33.304 "state": "enabled", 00:16:33.304 "thread": "nvmf_tgt_poll_group_000", 00:16:33.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:33.304 "listen_address": { 00:16:33.304 "trtype": "TCP", 00:16:33.304 "adrfam": "IPv4", 00:16:33.304 "traddr": "10.0.0.2", 00:16:33.304 
"trsvcid": "4420" 00:16:33.304 }, 00:16:33.304 "peer_address": { 00:16:33.304 "trtype": "TCP", 00:16:33.304 "adrfam": "IPv4", 00:16:33.304 "traddr": "10.0.0.1", 00:16:33.304 "trsvcid": "36006" 00:16:33.304 }, 00:16:33.304 "auth": { 00:16:33.304 "state": "completed", 00:16:33.304 "digest": "sha384", 00:16:33.304 "dhgroup": "ffdhe2048" 00:16:33.304 } 00:16:33.304 } 00:16:33.304 ]' 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.304 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.563 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:33.563 15:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.131 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.389 15:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.389 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.648 00:16:34.648 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.648 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.648 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.906 { 00:16:34.906 "cntlid": 59, 00:16:34.906 "qid": 0, 00:16:34.906 "state": "enabled", 00:16:34.906 "thread": "nvmf_tgt_poll_group_000", 00:16:34.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:34.906 "listen_address": { 00:16:34.906 "trtype": "TCP", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "10.0.0.2", 00:16:34.906 "trsvcid": "4420" 00:16:34.906 }, 00:16:34.906 "peer_address": { 00:16:34.906 "trtype": "TCP", 00:16:34.906 "adrfam": "IPv4", 00:16:34.906 "traddr": "10.0.0.1", 00:16:34.906 "trsvcid": "36052" 00:16:34.906 }, 00:16:34.906 "auth": { 00:16:34.906 "state": "completed", 00:16:34.906 "digest": "sha384", 00:16:34.906 "dhgroup": "ffdhe2048" 00:16:34.906 } 00:16:34.906 } 00:16:34.906 ]' 00:16:34.906 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.906 15:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.907 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.907 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.907 15:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.907 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.907 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.907 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.166 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:35.166 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.735 15:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.994 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.253 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.253 15:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.253 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.512 { 00:16:36.512 "cntlid": 61, 00:16:36.512 "qid": 0, 00:16:36.512 "state": "enabled", 00:16:36.512 "thread": "nvmf_tgt_poll_group_000", 00:16:36.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:36.512 "listen_address": { 00:16:36.512 "trtype": "TCP", 00:16:36.512 "adrfam": "IPv4", 00:16:36.512 "traddr": "10.0.0.2", 00:16:36.512 "trsvcid": "4420" 00:16:36.512 }, 00:16:36.512 "peer_address": { 00:16:36.512 "trtype": "TCP", 00:16:36.512 "adrfam": "IPv4", 00:16:36.512 "traddr": "10.0.0.1", 00:16:36.512 "trsvcid": "36080" 00:16:36.512 }, 00:16:36.512 "auth": { 00:16:36.512 "state": "completed", 00:16:36.512 "digest": "sha384", 00:16:36.512 "dhgroup": "ffdhe2048" 00:16:36.512 } 00:16:36.512 } 00:16:36.512 ]' 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.512 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.771 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:36.771 15:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.338 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.596 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.855 00:16:37.855 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.855 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.855 15:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.855 { 00:16:37.855 "cntlid": 63, 00:16:37.855 "qid": 0, 00:16:37.855 "state": "enabled", 00:16:37.855 "thread": "nvmf_tgt_poll_group_000", 00:16:37.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:37.855 "listen_address": { 00:16:37.855 "trtype": "TCP", 00:16:37.855 "adrfam": 
"IPv4", 00:16:37.855 "traddr": "10.0.0.2", 00:16:37.855 "trsvcid": "4420" 00:16:37.855 }, 00:16:37.855 "peer_address": { 00:16:37.855 "trtype": "TCP", 00:16:37.855 "adrfam": "IPv4", 00:16:37.855 "traddr": "10.0.0.1", 00:16:37.855 "trsvcid": "36108" 00:16:37.855 }, 00:16:37.855 "auth": { 00:16:37.855 "state": "completed", 00:16:37.855 "digest": "sha384", 00:16:37.855 "dhgroup": "ffdhe2048" 00:16:37.855 } 00:16:37.855 } 00:16:37.855 ]' 00:16:37.855 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.114 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.373 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:38.373 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.941 15:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.941 
15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.941 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.200 00:16:39.200 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.200 15:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.200 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.459 { 00:16:39.459 "cntlid": 65, 00:16:39.459 "qid": 0, 00:16:39.459 "state": "enabled", 00:16:39.459 "thread": "nvmf_tgt_poll_group_000", 00:16:39.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:39.459 "listen_address": { 00:16:39.459 "trtype": "TCP", 00:16:39.459 "adrfam": "IPv4", 00:16:39.459 "traddr": "10.0.0.2", 00:16:39.459 "trsvcid": "4420" 00:16:39.459 }, 00:16:39.459 "peer_address": { 00:16:39.459 "trtype": "TCP", 00:16:39.459 "adrfam": "IPv4", 00:16:39.459 "traddr": "10.0.0.1", 00:16:39.459 "trsvcid": "57430" 00:16:39.459 }, 00:16:39.459 "auth": { 00:16:39.459 "state": "completed", 00:16:39.459 "digest": "sha384", 00:16:39.459 "dhgroup": "ffdhe3072" 00:16:39.459 } 00:16:39.459 } 00:16:39.459 ]' 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:39.459 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.718 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.718 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.718 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.718 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.718 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.977 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:39.977 15:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.545 15:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.805 00:16:40.805 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.805 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.805 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.064 15:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.064 { 00:16:41.064 "cntlid": 67, 00:16:41.064 "qid": 0, 00:16:41.064 "state": "enabled", 00:16:41.064 "thread": "nvmf_tgt_poll_group_000", 00:16:41.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:41.064 "listen_address": { 00:16:41.064 "trtype": "TCP", 00:16:41.064 "adrfam": "IPv4", 00:16:41.064 "traddr": "10.0.0.2", 00:16:41.064 "trsvcid": "4420" 00:16:41.064 }, 00:16:41.064 "peer_address": { 00:16:41.064 "trtype": "TCP", 00:16:41.064 "adrfam": "IPv4", 00:16:41.064 "traddr": "10.0.0.1", 00:16:41.064 "trsvcid": "57452" 00:16:41.064 }, 00:16:41.064 "auth": { 00:16:41.064 "state": "completed", 00:16:41.064 "digest": "sha384", 00:16:41.064 "dhgroup": "ffdhe3072" 00:16:41.064 } 00:16:41.064 } 00:16:41.064 ]' 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.064 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.323 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.323 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.323 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.323 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.323 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.581 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:41.581 15:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:42.149 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.149 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:42.149 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.150 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.408 00:16:42.409 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.409 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.409 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.725 { 00:16:42.725 "cntlid": 69, 00:16:42.725 "qid": 0, 00:16:42.725 "state": "enabled", 00:16:42.725 "thread": "nvmf_tgt_poll_group_000", 00:16:42.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:42.725 
"listen_address": { 00:16:42.725 "trtype": "TCP", 00:16:42.725 "adrfam": "IPv4", 00:16:42.725 "traddr": "10.0.0.2", 00:16:42.725 "trsvcid": "4420" 00:16:42.725 }, 00:16:42.725 "peer_address": { 00:16:42.725 "trtype": "TCP", 00:16:42.725 "adrfam": "IPv4", 00:16:42.725 "traddr": "10.0.0.1", 00:16:42.725 "trsvcid": "57482" 00:16:42.725 }, 00:16:42.725 "auth": { 00:16:42.725 "state": "completed", 00:16:42.725 "digest": "sha384", 00:16:42.725 "dhgroup": "ffdhe3072" 00:16:42.725 } 00:16:42.725 } 00:16:42.725 ]' 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.725 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.082 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.082 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.082 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.082 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.082 15:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.082 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:43.082 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.714 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.973 15:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.973 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.232 { 00:16:44.232 "cntlid": 71, 00:16:44.232 "qid": 0, 00:16:44.232 "state": "enabled", 00:16:44.232 "thread": "nvmf_tgt_poll_group_000", 00:16:44.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:44.232 "listen_address": { 00:16:44.232 "trtype": "TCP", 00:16:44.232 "adrfam": "IPv4", 00:16:44.232 "traddr": "10.0.0.2", 00:16:44.232 "trsvcid": "4420" 00:16:44.232 }, 00:16:44.232 "peer_address": { 00:16:44.232 "trtype": "TCP", 00:16:44.232 "adrfam": "IPv4", 00:16:44.232 "traddr": "10.0.0.1", 00:16:44.232 "trsvcid": "57520" 00:16:44.232 }, 00:16:44.232 "auth": { 00:16:44.232 "state": "completed", 00:16:44.232 "digest": "sha384", 00:16:44.232 "dhgroup": "ffdhe3072" 00:16:44.232 } 00:16:44.232 } 00:16:44.232 ]' 00:16:44.232 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.491 15:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.491 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.749 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:44.749 15:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.316 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:45.317 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.575 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.575 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.575 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.575 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.834 00:16:45.834 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.834 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.834 15:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.834 15:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.834 { 00:16:45.834 "cntlid": 73, 00:16:45.834 "qid": 0, 00:16:45.834 "state": "enabled", 00:16:45.834 "thread": "nvmf_tgt_poll_group_000", 00:16:45.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:45.834 "listen_address": { 00:16:45.834 "trtype": "TCP", 00:16:45.834 "adrfam": "IPv4", 00:16:45.834 "traddr": "10.0.0.2", 00:16:45.834 "trsvcid": "4420" 00:16:45.834 }, 00:16:45.834 "peer_address": { 00:16:45.834 "trtype": "TCP", 00:16:45.834 "adrfam": "IPv4", 00:16:45.834 "traddr": "10.0.0.1", 00:16:45.834 "trsvcid": "57562" 00:16:45.834 }, 00:16:45.834 "auth": { 00:16:45.834 "state": "completed", 00:16:45.834 "digest": "sha384", 00:16:45.834 "dhgroup": "ffdhe4096" 00:16:45.834 } 00:16:45.834 } 00:16:45.834 ]' 00:16:45.834 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.094 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.094 15:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.353 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:46.353 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.920 15:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.920 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.179 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.179 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.179 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.179 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.438 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.438 { 00:16:47.438 "cntlid": 75, 00:16:47.438 "qid": 0, 00:16:47.438 "state": "enabled", 00:16:47.438 "thread": "nvmf_tgt_poll_group_000", 00:16:47.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:47.438 
"listen_address": { 00:16:47.438 "trtype": "TCP", 00:16:47.438 "adrfam": "IPv4", 00:16:47.438 "traddr": "10.0.0.2", 00:16:47.438 "trsvcid": "4420" 00:16:47.438 }, 00:16:47.438 "peer_address": { 00:16:47.438 "trtype": "TCP", 00:16:47.438 "adrfam": "IPv4", 00:16:47.438 "traddr": "10.0.0.1", 00:16:47.438 "trsvcid": "57574" 00:16:47.438 }, 00:16:47.438 "auth": { 00:16:47.438 "state": "completed", 00:16:47.438 "digest": "sha384", 00:16:47.438 "dhgroup": "ffdhe4096" 00:16:47.438 } 00:16:47.438 } 00:16:47.438 ]' 00:16:47.438 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.697 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.956 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:47.956 15:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.524 15:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.783 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.043 { 00:16:49.043 "cntlid": 77, 00:16:49.043 "qid": 0, 00:16:49.043 "state": "enabled", 00:16:49.043 "thread": "nvmf_tgt_poll_group_000", 00:16:49.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:49.043 "listen_address": { 00:16:49.043 "trtype": "TCP", 00:16:49.043 "adrfam": "IPv4", 00:16:49.043 "traddr": "10.0.0.2", 00:16:49.043 "trsvcid": "4420" 00:16:49.043 }, 00:16:49.043 "peer_address": { 00:16:49.043 "trtype": "TCP", 00:16:49.043 "adrfam": "IPv4", 00:16:49.043 "traddr": "10.0.0.1", 00:16:49.043 "trsvcid": "57598" 00:16:49.043 }, 00:16:49.043 "auth": { 00:16:49.043 "state": "completed", 00:16:49.043 "digest": "sha384", 00:16:49.043 "dhgroup": "ffdhe4096" 00:16:49.043 } 00:16:49.043 } 00:16:49.043 ]' 00:16:49.043 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.302 15:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.302 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.561 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:49.561 15:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.128 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:50.387 15:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.387 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.646 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.646 15:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.646 { 00:16:50.646 "cntlid": 79, 00:16:50.646 "qid": 0, 00:16:50.646 "state": "enabled", 00:16:50.646 "thread": "nvmf_tgt_poll_group_000", 00:16:50.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:50.646 "listen_address": { 00:16:50.646 "trtype": "TCP", 00:16:50.646 "adrfam": "IPv4", 00:16:50.646 "traddr": "10.0.0.2", 00:16:50.646 "trsvcid": "4420" 00:16:50.646 }, 00:16:50.646 "peer_address": { 00:16:50.646 "trtype": "TCP", 00:16:50.646 "adrfam": "IPv4", 00:16:50.646 "traddr": "10.0.0.1", 00:16:50.646 "trsvcid": "41036" 00:16:50.646 }, 00:16:50.646 "auth": { 00:16:50.646 "state": "completed", 00:16:50.646 "digest": "sha384", 00:16:50.646 "dhgroup": "ffdhe4096" 00:16:50.646 } 00:16:50.646 } 00:16:50.646 ]' 00:16:50.646 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.905 15:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.905 15:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.164 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:51.164 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:51.732 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.732 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:51.732 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.732 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.732 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.733 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.733 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.733 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:51.733 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.992 15:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.251 00:16:52.251 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.251 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.252 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.510 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.510 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.510 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.511 { 00:16:52.511 "cntlid": 81, 00:16:52.511 "qid": 0, 00:16:52.511 "state": "enabled", 00:16:52.511 "thread": "nvmf_tgt_poll_group_000", 00:16:52.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:52.511 "listen_address": { 
00:16:52.511 "trtype": "TCP", 00:16:52.511 "adrfam": "IPv4", 00:16:52.511 "traddr": "10.0.0.2", 00:16:52.511 "trsvcid": "4420" 00:16:52.511 }, 00:16:52.511 "peer_address": { 00:16:52.511 "trtype": "TCP", 00:16:52.511 "adrfam": "IPv4", 00:16:52.511 "traddr": "10.0.0.1", 00:16:52.511 "trsvcid": "41066" 00:16:52.511 }, 00:16:52.511 "auth": { 00:16:52.511 "state": "completed", 00:16:52.511 "digest": "sha384", 00:16:52.511 "dhgroup": "ffdhe6144" 00:16:52.511 } 00:16:52.511 } 00:16:52.511 ]' 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.511 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.769 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:52.769 15:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.337 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.596 15:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.855 00:16:53.855 15:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.855 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.855 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.114 { 00:16:54.114 "cntlid": 83, 00:16:54.114 "qid": 0, 00:16:54.114 "state": "enabled", 00:16:54.114 "thread": "nvmf_tgt_poll_group_000", 00:16:54.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:54.114 "listen_address": { 00:16:54.114 "trtype": "TCP", 00:16:54.114 "adrfam": "IPv4", 00:16:54.114 "traddr": "10.0.0.2", 00:16:54.114 "trsvcid": "4420" 00:16:54.114 }, 00:16:54.114 "peer_address": { 00:16:54.114 "trtype": "TCP", 00:16:54.114 "adrfam": "IPv4", 00:16:54.114 "traddr": "10.0.0.1", 00:16:54.114 "trsvcid": "41084" 00:16:54.114 }, 00:16:54.114 "auth": { 00:16:54.114 "state": "completed", 00:16:54.114 "digest": "sha384", 00:16:54.114 "dhgroup": "ffdhe6144" 00:16:54.114 } 00:16:54.114 } 00:16:54.114 ]' 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.114 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.373 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.373 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.373 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.373 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:54.373 15:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.940 15:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.940 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.199 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.458 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.717 { 00:16:55.717 "cntlid": 85, 00:16:55.717 "qid": 0, 00:16:55.717 "state": "enabled", 00:16:55.717 "thread": "nvmf_tgt_poll_group_000", 00:16:55.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:55.717 "listen_address": { 00:16:55.717 "trtype": "TCP", 00:16:55.717 "adrfam": "IPv4", 00:16:55.717 "traddr": "10.0.0.2", 00:16:55.717 "trsvcid": "4420" 00:16:55.717 }, 00:16:55.717 "peer_address": { 00:16:55.717 "trtype": "TCP", 00:16:55.717 "adrfam": "IPv4", 00:16:55.717 "traddr": "10.0.0.1", 00:16:55.717 "trsvcid": "41104" 00:16:55.717 }, 00:16:55.717 "auth": { 00:16:55.717 "state": "completed", 00:16:55.717 "digest": "sha384", 00:16:55.717 "dhgroup": "ffdhe6144" 00:16:55.717 } 00:16:55.717 } 00:16:55.717 ]' 00:16:55.717 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.976 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.976 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.976 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.976 15:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.976 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:55.976 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.976 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.235 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:56.235 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.802 15:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.371 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.371 { 00:16:57.371 "cntlid": 87, 00:16:57.371 "qid": 0, 00:16:57.371 "state": "enabled", 00:16:57.371 "thread": "nvmf_tgt_poll_group_000", 00:16:57.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:57.371 "listen_address": { 00:16:57.371 "trtype": 
"TCP", 00:16:57.371 "adrfam": "IPv4", 00:16:57.371 "traddr": "10.0.0.2", 00:16:57.371 "trsvcid": "4420" 00:16:57.371 }, 00:16:57.371 "peer_address": { 00:16:57.371 "trtype": "TCP", 00:16:57.371 "adrfam": "IPv4", 00:16:57.371 "traddr": "10.0.0.1", 00:16:57.371 "trsvcid": "41124" 00:16:57.371 }, 00:16:57.371 "auth": { 00:16:57.371 "state": "completed", 00:16:57.371 "digest": "sha384", 00:16:57.371 "dhgroup": "ffdhe6144" 00:16:57.371 } 00:16:57.371 } 00:16:57.371 ]' 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.371 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.630 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.630 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.630 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.630 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.630 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.889 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:57.889 15:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.457 15:49:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.457 15:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.025 00:16:59.025 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.025 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.025 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.283 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.284 { 00:16:59.284 "cntlid": 89, 00:16:59.284 "qid": 0, 00:16:59.284 "state": "enabled", 00:16:59.284 "thread": "nvmf_tgt_poll_group_000", 00:16:59.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:16:59.284 "listen_address": { 00:16:59.284 "trtype": "TCP", 00:16:59.284 "adrfam": "IPv4", 00:16:59.284 "traddr": "10.0.0.2", 00:16:59.284 "trsvcid": "4420" 00:16:59.284 }, 00:16:59.284 "peer_address": { 00:16:59.284 "trtype": "TCP", 00:16:59.284 "adrfam": "IPv4", 00:16:59.284 "traddr": "10.0.0.1", 00:16:59.284 "trsvcid": "41144" 00:16:59.284 }, 00:16:59.284 "auth": { 00:16:59.284 "state": "completed", 00:16:59.284 "digest": "sha384", 00:16:59.284 "dhgroup": "ffdhe8192" 00:16:59.284 } 00:16:59.284 } 00:16:59.284 ]' 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.284 15:49:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.284 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.543 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:16:59.543 15:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.110 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.369 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.937 00:17:00.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.937 15:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.937 { 00:17:00.937 "cntlid": 91, 00:17:00.937 "qid": 0, 00:17:00.937 "state": "enabled", 00:17:00.937 "thread": "nvmf_tgt_poll_group_000", 00:17:00.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:00.937 "listen_address": { 00:17:00.937 "trtype": "TCP", 00:17:00.937 "adrfam": "IPv4", 00:17:00.937 "traddr": "10.0.0.2", 00:17:00.937 "trsvcid": "4420" 00:17:00.937 }, 00:17:00.937 "peer_address": { 00:17:00.937 "trtype": "TCP", 00:17:00.937 "adrfam": "IPv4", 00:17:00.937 "traddr": "10.0.0.1", 00:17:00.937 "trsvcid": "57688" 00:17:00.937 }, 00:17:00.937 "auth": { 00:17:00.937 "state": "completed", 00:17:00.937 "digest": "sha384", 00:17:00.937 "dhgroup": "ffdhe8192" 00:17:00.937 } 00:17:00.937 } 00:17:00.937 ]' 00:17:00.937 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.196 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.455 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:01.455 15:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
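The `connect_authenticate` checks that keep recurring in this log (`jq -r '.[0].auth.digest'`, `'.[0].auth.dhgroup'`, `'.[0].auth.state'` against the `nvmf_subsystem_get_qpairs` output) reduce to simple key lookups on the qpair JSON. A minimal sketch of the same assertions in Python, using a qpair entry trimmed from this run (timestamps stripped; fields not shown in the log are omitted):

```python
import json

# Shape of one qpair as reported by `rpc.py nvmf_subsystem_get_qpairs`
# in the log above; values taken from the cntlid-91 entry of this run.
qpairs = json.loads("""
[
  {
    "cntlid": 91,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe8192"
    }
  }
]
""")

# The test's jq filters '.[0].auth.digest' etc. are plain lookups:
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha384"     # matches [[ sha384 == sha384 ]]
assert auth["dhgroup"] == "ffdhe8192" # matches [[ ffdhe8192 == ffdhe8192 ]]
assert auth["state"] == "completed"   # auth handshake finished
print("qpair authenticated:", auth["digest"], auth["dhgroup"])
```

If any of the three fields disagrees with the digest/dhgroup the test just configured via `bdev_nvme_set_options`, the corresponding `[[ … == … ]]` in `auth.sh` fails and the run aborts.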
00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.024 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.283 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.543 00:17:02.543 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.543 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.543 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.802 { 00:17:02.802 "cntlid": 93, 00:17:02.802 "qid": 0, 00:17:02.802 "state": "enabled", 00:17:02.802 "thread": "nvmf_tgt_poll_group_000", 00:17:02.802 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:02.802 "listen_address": { 00:17:02.802 "trtype": "TCP", 00:17:02.802 "adrfam": "IPv4", 00:17:02.802 "traddr": "10.0.0.2", 00:17:02.802 "trsvcid": "4420" 00:17:02.802 }, 00:17:02.802 "peer_address": { 00:17:02.802 "trtype": "TCP", 00:17:02.802 "adrfam": "IPv4", 00:17:02.802 "traddr": "10.0.0.1", 00:17:02.802 "trsvcid": "57706" 00:17:02.802 }, 00:17:02.802 "auth": { 00:17:02.802 "state": "completed", 00:17:02.802 "digest": "sha384", 00:17:02.802 "dhgroup": "ffdhe8192" 00:17:02.802 } 00:17:02.802 } 00:17:02.802 ]' 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.802 15:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.802 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.802 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.061 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.061 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.061 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.061 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:03.061 15:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:03.629 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.629 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.888 15:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.888 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.456 00:17:04.456 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:04.456 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.456 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.714 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.714 { 00:17:04.714 "cntlid": 95, 00:17:04.714 "qid": 0, 00:17:04.714 "state": "enabled", 00:17:04.714 "thread": "nvmf_tgt_poll_group_000", 00:17:04.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:04.714 "listen_address": { 00:17:04.714 "trtype": "TCP", 00:17:04.714 "adrfam": "IPv4", 00:17:04.714 "traddr": "10.0.0.2", 00:17:04.714 "trsvcid": "4420" 00:17:04.714 }, 00:17:04.714 "peer_address": { 00:17:04.714 "trtype": "TCP", 00:17:04.714 "adrfam": "IPv4", 00:17:04.714 "traddr": "10.0.0.1", 00:17:04.714 "trsvcid": "57734" 00:17:04.714 }, 00:17:04.714 "auth": { 00:17:04.715 "state": "completed", 00:17:04.715 "digest": "sha384", 00:17:04.715 "dhgroup": "ffdhe8192" 00:17:04.715 } 00:17:04.715 } 00:17:04.715 ]' 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.715 15:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.715 15:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.973 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:04.973 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.541 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.800 15:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.059 00:17:06.059 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.059 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.059 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.317 15:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.317 { 00:17:06.317 "cntlid": 97, 00:17:06.317 "qid": 0, 00:17:06.317 "state": "enabled", 00:17:06.317 "thread": "nvmf_tgt_poll_group_000", 00:17:06.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:06.317 "listen_address": { 00:17:06.317 "trtype": "TCP", 00:17:06.317 "adrfam": "IPv4", 00:17:06.317 "traddr": "10.0.0.2", 00:17:06.317 "trsvcid": "4420" 00:17:06.317 }, 00:17:06.317 "peer_address": { 00:17:06.317 "trtype": "TCP", 00:17:06.317 "adrfam": "IPv4", 00:17:06.317 "traddr": "10.0.0.1", 00:17:06.317 "trsvcid": "57748" 00:17:06.317 }, 00:17:06.317 "auth": { 00:17:06.317 "state": "completed", 00:17:06.317 "digest": "sha512", 00:17:06.317 "dhgroup": "null" 00:17:06.317 } 00:17:06.317 } 00:17:06.317 ]' 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.317 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.575 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:06.575 15:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.143 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.401 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.659 00:17:07.659 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.659 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.659 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.918 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.919 { 00:17:07.919 "cntlid": 99, 
00:17:07.919 "qid": 0, 00:17:07.919 "state": "enabled", 00:17:07.919 "thread": "nvmf_tgt_poll_group_000", 00:17:07.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:07.919 "listen_address": { 00:17:07.919 "trtype": "TCP", 00:17:07.919 "adrfam": "IPv4", 00:17:07.919 "traddr": "10.0.0.2", 00:17:07.919 "trsvcid": "4420" 00:17:07.919 }, 00:17:07.919 "peer_address": { 00:17:07.919 "trtype": "TCP", 00:17:07.919 "adrfam": "IPv4", 00:17:07.919 "traddr": "10.0.0.1", 00:17:07.919 "trsvcid": "57776" 00:17:07.919 }, 00:17:07.919 "auth": { 00:17:07.919 "state": "completed", 00:17:07.919 "digest": "sha512", 00:17:07.919 "dhgroup": "null" 00:17:07.919 } 00:17:07.919 } 00:17:07.919 ]' 00:17:07.919 15:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.919 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.177 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret 
DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:08.177 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.743 15:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.002 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.260 00:17:09.260 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.260 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.260 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.519 { 00:17:09.519 "cntlid": 101, 00:17:09.519 "qid": 0, 00:17:09.519 "state": "enabled", 00:17:09.519 "thread": "nvmf_tgt_poll_group_000", 00:17:09.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:09.519 "listen_address": { 00:17:09.519 "trtype": "TCP", 00:17:09.519 "adrfam": "IPv4", 00:17:09.519 "traddr": "10.0.0.2", 00:17:09.519 "trsvcid": "4420" 00:17:09.519 }, 00:17:09.519 "peer_address": { 00:17:09.519 "trtype": "TCP", 00:17:09.519 "adrfam": "IPv4", 00:17:09.519 "traddr": "10.0.0.1", 00:17:09.519 "trsvcid": "36912" 00:17:09.519 }, 00:17:09.519 "auth": { 00:17:09.519 "state": "completed", 00:17:09.519 "digest": "sha512", 00:17:09.519 "dhgroup": "null" 00:17:09.519 } 00:17:09.519 } 
00:17:09.519 ]' 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.519 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.777 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:09.777 15:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.344 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.344 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.603 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.862 00:17:10.862 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.862 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.862 15:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.121 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.121 { 00:17:11.121 "cntlid": 103, 00:17:11.121 "qid": 0, 00:17:11.121 "state": "enabled", 00:17:11.121 "thread": "nvmf_tgt_poll_group_000", 00:17:11.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:11.121 "listen_address": { 00:17:11.121 "trtype": "TCP", 00:17:11.121 "adrfam": "IPv4", 00:17:11.122 "traddr": "10.0.0.2", 00:17:11.122 "trsvcid": "4420" 00:17:11.122 }, 00:17:11.122 "peer_address": { 00:17:11.122 "trtype": "TCP", 00:17:11.122 "adrfam": "IPv4", 00:17:11.122 "traddr": "10.0.0.1", 00:17:11.122 "trsvcid": "36942" 00:17:11.122 }, 00:17:11.122 "auth": { 00:17:11.122 "state": "completed", 00:17:11.122 "digest": "sha512", 00:17:11.122 "dhgroup": "null" 00:17:11.122 } 00:17:11.122 } 00:17:11.122 ]' 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.122 15:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.122 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.380 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:11.380 15:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.949 15:50:07 
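After each authenticated attach, the test (auth.sh@74-77 above) fetches the qpair list with `nvmf_subsystem_get_qpairs` and checks the negotiated `auth.digest`, `auth.dhgroup`, and `auth.state` with three `jq` expressions. A minimal sketch of that same check in Python, using the qpair JSON printed in the log above (trimmed to the fields the test actually inspects):

```python
import json

# Qpair entry as it appears in the transcript above, reduced to the
# fields that auth.sh@75-77 extracts with jq.
qpairs_json = '''
[
  {
    "cntlid": 103,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "null"
    }
  }
]
'''

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]

# Equivalent of: jq -r '.[0].auth.digest' / '.[0].auth.dhgroup' /
# '.[0].auth.state', each compared against the expected value.
assert auth["digest"] == "sha512"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth negotiation verified:", auth)
```

The `state == "completed"` check is the one that actually proves DH-HMAC-CHAP finished; `digest` and `dhgroup` confirm the target honored the parameters set via `bdev_nvme_set_options` for this round.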
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.949 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.207 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.465 00:17:12.466 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.466 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.466 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.724 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.724 { 00:17:12.724 "cntlid": 105, 00:17:12.724 "qid": 0, 00:17:12.724 "state": "enabled", 00:17:12.724 "thread": "nvmf_tgt_poll_group_000", 00:17:12.724 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:12.724 "listen_address": { 00:17:12.725 "trtype": "TCP", 00:17:12.725 "adrfam": "IPv4", 00:17:12.725 "traddr": "10.0.0.2", 00:17:12.725 "trsvcid": "4420" 00:17:12.725 }, 00:17:12.725 "peer_address": { 00:17:12.725 "trtype": "TCP", 00:17:12.725 "adrfam": "IPv4", 00:17:12.725 "traddr": "10.0.0.1", 00:17:12.725 "trsvcid": "36966" 00:17:12.725 }, 00:17:12.725 "auth": { 00:17:12.725 "state": "completed", 00:17:12.725 "digest": "sha512", 00:17:12.725 "dhgroup": "ffdhe2048" 00:17:12.725 } 00:17:12.725 } 00:17:12.725 ]' 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.725 15:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.984 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret 
DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:12.984 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:13.551 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.551 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:13.551 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.551 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.552 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.552 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.552 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.552 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.811 15:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.811 15:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.070 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.070 { 00:17:14.070 "cntlid": 107, 00:17:14.070 "qid": 0, 00:17:14.070 "state": "enabled", 00:17:14.070 "thread": "nvmf_tgt_poll_group_000", 00:17:14.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:14.070 "listen_address": { 00:17:14.070 "trtype": "TCP", 00:17:14.070 "adrfam": "IPv4", 00:17:14.070 "traddr": "10.0.0.2", 00:17:14.070 "trsvcid": "4420" 00:17:14.070 }, 00:17:14.070 "peer_address": { 00:17:14.070 "trtype": "TCP", 00:17:14.070 "adrfam": "IPv4", 00:17:14.070 "traddr": "10.0.0.1", 00:17:14.070 "trsvcid": "36996" 00:17:14.070 }, 00:17:14.070 "auth": { 00:17:14.070 "state": 
"completed", 00:17:14.070 "digest": "sha512", 00:17:14.070 "dhgroup": "ffdhe2048" 00:17:14.070 } 00:17:14.070 } 00:17:14.070 ]' 00:17:14.070 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.329 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.588 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:14.588 15:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:15.155 15:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.155 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.414 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.414 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.414 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.414 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.414 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.673 
15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.673 { 00:17:15.673 "cntlid": 109, 00:17:15.673 "qid": 0, 00:17:15.673 "state": "enabled", 00:17:15.673 "thread": "nvmf_tgt_poll_group_000", 00:17:15.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:15.673 "listen_address": { 00:17:15.673 "trtype": "TCP", 00:17:15.673 "adrfam": "IPv4", 00:17:15.673 "traddr": "10.0.0.2", 00:17:15.673 "trsvcid": "4420" 00:17:15.673 }, 00:17:15.673 "peer_address": { 00:17:15.673 "trtype": "TCP", 00:17:15.673 "adrfam": "IPv4", 00:17:15.673 "traddr": "10.0.0.1", 00:17:15.673 "trsvcid": "37014" 00:17:15.673 }, 00:17:15.673 "auth": { 00:17:15.673 "state": "completed", 00:17:15.673 "digest": "sha512", 00:17:15.673 "dhgroup": "ffdhe2048" 00:17:15.673 } 00:17:15.673 } 00:17:15.673 ]' 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.673 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.932 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.932 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.932 15:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.932 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.932 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.932 15:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.191 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:16.191 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.759 
15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.759 15:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.759 15:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.018 00:17:17.018 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.018 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.018 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.277 { 00:17:17.277 "cntlid": 111, 
00:17:17.277 "qid": 0, 00:17:17.277 "state": "enabled", 00:17:17.277 "thread": "nvmf_tgt_poll_group_000", 00:17:17.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:17.277 "listen_address": { 00:17:17.277 "trtype": "TCP", 00:17:17.277 "adrfam": "IPv4", 00:17:17.277 "traddr": "10.0.0.2", 00:17:17.277 "trsvcid": "4420" 00:17:17.277 }, 00:17:17.277 "peer_address": { 00:17:17.277 "trtype": "TCP", 00:17:17.277 "adrfam": "IPv4", 00:17:17.277 "traddr": "10.0.0.1", 00:17:17.277 "trsvcid": "37054" 00:17:17.277 }, 00:17:17.277 "auth": { 00:17:17.277 "state": "completed", 00:17:17.277 "digest": "sha512", 00:17:17.277 "dhgroup": "ffdhe2048" 00:17:17.277 } 00:17:17.277 } 00:17:17.277 ]' 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.277 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:17.536 15:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:18.103 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.362 15:50:13 
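The `for dhgroup`/`for keyid` trace markers (auth.sh@119-120) show the driver structure behind these repeated rounds: an outer loop over DH groups and an inner loop over the four keys, calling `bdev_nvme_set_options` before every `connect_authenticate`. A sketch of that iteration order, inferred from this transcript rather than from auth.sh itself:

```python
# Rounds observed so far in this log: sha512 digest throughout, groups
# "null" then "ffdhe2048" (keys 0-3) and now the start of "ffdhe3072".
digest = "sha512"
dhgroups = ["null", "ffdhe2048", "ffdhe3072"]
keys = ["key0", "key1", "key2", "key3"]

# Inner loop over keys, outer loop over groups, as auth.sh@119-120 does.
rounds = [(digest, g, k) for g in dhgroups for k in keys]

# ffdhe2048/key1 is one of the rounds shown above; ffdhe3072/key0 is the
# ninth round overall, matching where this chunk of the log leaves off.
assert ("sha512", "ffdhe2048", "key1") in rounds
assert rounds.index(("sha512", "ffdhe3072", "key0")) == 8
print(len(rounds), "rounds for this digest")
```

Each tuple corresponds to one full cycle in the log: `set_options`, `nvmf_subsystem_add_host`, `bdev_nvme_attach_controller`, qpair verification, detach, `nvme connect`/`disconnect`, and `nvmf_subsystem_remove_host`.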
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.362 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.621 00:17:18.621 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.621 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.621 15:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.880 { 00:17:18.880 "cntlid": 113, 00:17:18.880 "qid": 0, 00:17:18.880 "state": "enabled", 00:17:18.880 "thread": "nvmf_tgt_poll_group_000", 00:17:18.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:18.880 "listen_address": { 00:17:18.880 "trtype": "TCP", 00:17:18.880 "adrfam": "IPv4", 00:17:18.880 "traddr": "10.0.0.2", 00:17:18.880 "trsvcid": "4420" 00:17:18.880 }, 00:17:18.880 "peer_address": { 00:17:18.880 "trtype": "TCP", 00:17:18.880 "adrfam": "IPv4", 00:17:18.880 "traddr": "10.0.0.1", 00:17:18.880 "trsvcid": "37094" 00:17:18.880 }, 00:17:18.880 "auth": { 00:17:18.880 "state": 
"completed", 00:17:18.880 "digest": "sha512", 00:17:18.880 "dhgroup": "ffdhe3072" 00:17:18.880 } 00:17:18.880 } 00:17:18.880 ]' 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.880 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:19.139 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret 
DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:19.706 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.965 15:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.965 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.966 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.227 00:17:20.227 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.227 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.227 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.538 { 00:17:20.538 "cntlid": 115, 00:17:20.538 "qid": 0, 00:17:20.538 "state": "enabled", 00:17:20.538 "thread": "nvmf_tgt_poll_group_000", 00:17:20.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:20.538 "listen_address": { 00:17:20.538 "trtype": "TCP", 00:17:20.538 "adrfam": "IPv4", 00:17:20.538 "traddr": "10.0.0.2", 00:17:20.538 "trsvcid": "4420" 00:17:20.538 }, 00:17:20.538 "peer_address": { 00:17:20.538 "trtype": "TCP", 00:17:20.538 "adrfam": "IPv4", 00:17:20.538 "traddr": "10.0.0.1", 00:17:20.538 "trsvcid": "40722" 00:17:20.538 }, 00:17:20.538 "auth": { 00:17:20.538 "state": "completed", 00:17:20.538 "digest": "sha512", 00:17:20.538 "dhgroup": "ffdhe3072" 00:17:20.538 } 00:17:20.538 } 00:17:20.538 ]' 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.538 15:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.538 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.838 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.838 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.838 15:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.838 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:20.838 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.406 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.665 15:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.924 00:17:21.924 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.924 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.924 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.183 15:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.183 { 00:17:22.183 "cntlid": 117, 00:17:22.183 "qid": 0, 00:17:22.183 "state": "enabled", 00:17:22.183 "thread": "nvmf_tgt_poll_group_000", 00:17:22.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:22.183 "listen_address": { 00:17:22.183 "trtype": "TCP", 00:17:22.183 "adrfam": "IPv4", 00:17:22.183 "traddr": "10.0.0.2", 00:17:22.183 "trsvcid": "4420" 00:17:22.183 }, 00:17:22.183 "peer_address": { 00:17:22.183 "trtype": "TCP", 00:17:22.183 "adrfam": "IPv4", 00:17:22.183 "traddr": "10.0.0.1", 00:17:22.183 "trsvcid": "40754" 00:17:22.183 }, 00:17:22.183 "auth": { 00:17:22.183 "state": "completed", 00:17:22.183 "digest": "sha512", 00:17:22.183 "dhgroup": "ffdhe3072" 00:17:22.183 } 00:17:22.183 } 00:17:22.183 ]' 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.183 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.442 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:22.442 15:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:23.009 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.009 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:23.009 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.009 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.009 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.010 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.010 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.010 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.268 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.269 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.527 00:17:23.527 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.527 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.528 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.787 { 00:17:23.787 "cntlid": 119, 00:17:23.787 "qid": 0, 00:17:23.787 "state": "enabled", 00:17:23.787 "thread": "nvmf_tgt_poll_group_000", 00:17:23.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:23.787 "listen_address": { 00:17:23.787 "trtype": "TCP", 00:17:23.787 "adrfam": "IPv4", 00:17:23.787 "traddr": "10.0.0.2", 00:17:23.787 "trsvcid": "4420" 00:17:23.787 }, 00:17:23.787 "peer_address": { 00:17:23.787 "trtype": "TCP", 00:17:23.787 "adrfam": "IPv4", 00:17:23.787 "traddr": "10.0.0.1", 
00:17:23.787 "trsvcid": "40786" 00:17:23.787 }, 00:17:23.787 "auth": { 00:17:23.787 "state": "completed", 00:17:23.787 "digest": "sha512", 00:17:23.787 "dhgroup": "ffdhe3072" 00:17:23.787 } 00:17:23.787 } 00:17:23.787 ]' 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.787 15:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.046 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:24.046 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.613 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.871 15:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.871 15:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.130 00:17:25.130 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.130 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.130 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.389 { 00:17:25.389 "cntlid": 121, 00:17:25.389 "qid": 0, 00:17:25.389 "state": "enabled", 00:17:25.389 "thread": "nvmf_tgt_poll_group_000", 00:17:25.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:25.389 "listen_address": { 00:17:25.389 "trtype": "TCP", 00:17:25.389 "adrfam": "IPv4", 00:17:25.389 "traddr": "10.0.0.2", 00:17:25.389 "trsvcid": "4420" 00:17:25.389 }, 00:17:25.389 "peer_address": { 00:17:25.389 "trtype": "TCP", 00:17:25.389 "adrfam": "IPv4", 00:17:25.389 "traddr": "10.0.0.1", 00:17:25.389 "trsvcid": "40808" 00:17:25.389 }, 00:17:25.389 "auth": { 00:17:25.389 "state": "completed", 00:17:25.389 "digest": "sha512", 00:17:25.389 "dhgroup": "ffdhe4096" 00:17:25.389 } 00:17:25.389 } 00:17:25.389 ]' 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.389 15:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.389 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.648 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:25.648 15:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.215 15:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.215 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.473 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.474 15:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.474 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.732 00:17:26.732 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.732 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.733 15:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.991 { 00:17:26.991 "cntlid": 123, 00:17:26.991 "qid": 0, 00:17:26.991 "state": "enabled", 00:17:26.991 "thread": "nvmf_tgt_poll_group_000", 00:17:26.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:26.991 "listen_address": { 00:17:26.991 "trtype": "TCP", 00:17:26.991 "adrfam": "IPv4", 00:17:26.991 "traddr": "10.0.0.2", 00:17:26.991 "trsvcid": "4420" 00:17:26.991 }, 00:17:26.991 "peer_address": { 00:17:26.991 "trtype": "TCP", 00:17:26.991 "adrfam": "IPv4", 00:17:26.991 "traddr": "10.0.0.1", 00:17:26.991 "trsvcid": "40834" 00:17:26.991 }, 00:17:26.991 "auth": { 00:17:26.991 "state": "completed", 00:17:26.991 "digest": "sha512", 00:17:26.991 "dhgroup": "ffdhe4096" 00:17:26.991 } 00:17:26.991 } 00:17:26.991 ]' 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.991 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.992 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.992 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.992 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.250 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:27.250 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:27.818 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.818 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:27.818 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.818 15:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.818 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.818 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.818 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.818 15:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.077 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.078 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.078 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.336 00:17:28.336 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.336 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.336 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.595 { 00:17:28.595 "cntlid": 125, 00:17:28.595 "qid": 0, 00:17:28.595 "state": "enabled", 00:17:28.595 "thread": "nvmf_tgt_poll_group_000", 00:17:28.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:28.595 "listen_address": { 00:17:28.595 "trtype": "TCP", 00:17:28.595 "adrfam": "IPv4", 00:17:28.595 "traddr": "10.0.0.2", 00:17:28.595 
"trsvcid": "4420" 00:17:28.595 }, 00:17:28.595 "peer_address": { 00:17:28.595 "trtype": "TCP", 00:17:28.595 "adrfam": "IPv4", 00:17:28.595 "traddr": "10.0.0.1", 00:17:28.595 "trsvcid": "40860" 00:17:28.595 }, 00:17:28.595 "auth": { 00:17:28.595 "state": "completed", 00:17:28.595 "digest": "sha512", 00:17:28.595 "dhgroup": "ffdhe4096" 00:17:28.595 } 00:17:28.595 } 00:17:28.595 ]' 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.595 15:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.854 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:28.854 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.422 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.681 15:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.940 00:17:29.940 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.940 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.940 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.198 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.198 { 00:17:30.198 "cntlid": 127, 00:17:30.198 "qid": 0, 00:17:30.198 "state": "enabled", 00:17:30.198 "thread": "nvmf_tgt_poll_group_000", 00:17:30.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:30.198 "listen_address": { 00:17:30.199 "trtype": "TCP", 00:17:30.199 "adrfam": "IPv4", 00:17:30.199 "traddr": "10.0.0.2", 00:17:30.199 "trsvcid": "4420" 00:17:30.199 }, 00:17:30.199 "peer_address": { 00:17:30.199 "trtype": "TCP", 00:17:30.199 "adrfam": "IPv4", 00:17:30.199 "traddr": "10.0.0.1", 00:17:30.199 "trsvcid": "54738" 00:17:30.199 }, 00:17:30.199 "auth": { 00:17:30.199 "state": "completed", 00:17:30.199 "digest": "sha512", 00:17:30.199 "dhgroup": "ffdhe4096" 00:17:30.199 } 00:17:30.199 } 00:17:30.199 ]' 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.199 15:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.199 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.457 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:30.457 15:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.024 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.283 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.851 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.851 15:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.851 15:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.851 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.851 { 00:17:31.851 "cntlid": 129, 00:17:31.851 "qid": 0, 00:17:31.851 "state": "enabled", 00:17:31.851 "thread": "nvmf_tgt_poll_group_000", 00:17:31.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:31.851 "listen_address": { 00:17:31.851 "trtype": "TCP", 00:17:31.851 "adrfam": "IPv4", 00:17:31.851 "traddr": "10.0.0.2", 00:17:31.851 "trsvcid": "4420" 00:17:31.851 }, 00:17:31.851 "peer_address": { 00:17:31.851 "trtype": "TCP", 00:17:31.851 "adrfam": "IPv4", 00:17:31.851 "traddr": "10.0.0.1", 00:17:31.851 "trsvcid": "54758" 00:17:31.851 }, 00:17:31.851 "auth": { 00:17:31.851 "state": "completed", 00:17:31.851 "digest": "sha512", 00:17:31.851 "dhgroup": "ffdhe6144" 00:17:31.851 } 00:17:31.851 } 00:17:31.851 ]' 00:17:31.851 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.851 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.851 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.109 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.109 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.109 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.109 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.109 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.368 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:32.368 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.936 15:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.936 15:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.936 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.504 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.504 { 00:17:33.504 "cntlid": 131, 00:17:33.504 "qid": 0, 00:17:33.504 "state": "enabled", 00:17:33.504 "thread": "nvmf_tgt_poll_group_000", 00:17:33.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:33.504 "listen_address": { 00:17:33.504 "trtype": "TCP", 00:17:33.504 "adrfam": "IPv4", 00:17:33.504 "traddr": "10.0.0.2", 00:17:33.504 
"trsvcid": "4420" 00:17:33.504 }, 00:17:33.504 "peer_address": { 00:17:33.504 "trtype": "TCP", 00:17:33.504 "adrfam": "IPv4", 00:17:33.504 "traddr": "10.0.0.1", 00:17:33.504 "trsvcid": "54792" 00:17:33.504 }, 00:17:33.504 "auth": { 00:17:33.504 "state": "completed", 00:17:33.504 "digest": "sha512", 00:17:33.504 "dhgroup": "ffdhe6144" 00:17:33.504 } 00:17:33.504 } 00:17:33.504 ]' 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.504 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:33.763 15:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.330 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.589 15:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.157 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.157 { 00:17:35.157 "cntlid": 133, 00:17:35.157 "qid": 0, 00:17:35.157 "state": "enabled", 00:17:35.157 "thread": "nvmf_tgt_poll_group_000", 00:17:35.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:35.157 "listen_address": { 00:17:35.157 "trtype": "TCP", 00:17:35.157 "adrfam": "IPv4", 00:17:35.157 "traddr": "10.0.0.2", 00:17:35.157 "trsvcid": "4420" 00:17:35.157 }, 00:17:35.157 "peer_address": { 00:17:35.157 "trtype": "TCP", 00:17:35.157 "adrfam": "IPv4", 00:17:35.157 "traddr": "10.0.0.1", 00:17:35.157 "trsvcid": "54818" 00:17:35.157 }, 00:17:35.157 "auth": { 00:17:35.157 "state": "completed", 00:17:35.157 "digest": "sha512", 00:17:35.157 "dhgroup": "ffdhe6144" 00:17:35.157 } 00:17:35.157 } 00:17:35.157 ]' 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.157 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.416 15:50:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.416 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.416 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.416 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.416 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.416 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.674 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:35.674 15:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.242 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.501 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.501 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.501 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.501 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.760 00:17:36.760 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.760 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.760 15:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.019 { 00:17:37.019 "cntlid": 135, 00:17:37.019 "qid": 0, 00:17:37.019 "state": "enabled", 00:17:37.019 "thread": "nvmf_tgt_poll_group_000", 00:17:37.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:37.019 "listen_address": { 00:17:37.019 "trtype": "TCP", 00:17:37.019 "adrfam": "IPv4", 00:17:37.019 "traddr": "10.0.0.2", 00:17:37.019 "trsvcid": "4420" 00:17:37.019 }, 00:17:37.019 "peer_address": { 00:17:37.019 "trtype": "TCP", 00:17:37.019 "adrfam": "IPv4", 00:17:37.019 "traddr": "10.0.0.1", 00:17:37.019 "trsvcid": "54846" 00:17:37.019 }, 00:17:37.019 "auth": { 00:17:37.019 "state": "completed", 00:17:37.019 "digest": "sha512", 00:17:37.019 "dhgroup": "ffdhe6144" 00:17:37.019 } 00:17:37.019 } 00:17:37.019 ]' 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.019 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.277 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:37.277 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.845 15:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.845 15:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.104 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.672 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.672 { 00:17:38.672 "cntlid": 137, 00:17:38.672 "qid": 0, 00:17:38.672 "state": "enabled", 00:17:38.672 "thread": "nvmf_tgt_poll_group_000", 00:17:38.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:38.672 "listen_address": { 00:17:38.672 "trtype": "TCP", 00:17:38.672 "adrfam": "IPv4", 00:17:38.672 "traddr": "10.0.0.2", 00:17:38.672 
"trsvcid": "4420" 00:17:38.672 }, 00:17:38.672 "peer_address": { 00:17:38.672 "trtype": "TCP", 00:17:38.672 "adrfam": "IPv4", 00:17:38.672 "traddr": "10.0.0.1", 00:17:38.672 "trsvcid": "54888" 00:17:38.672 }, 00:17:38.672 "auth": { 00:17:38.672 "state": "completed", 00:17:38.672 "digest": "sha512", 00:17:38.672 "dhgroup": "ffdhe8192" 00:17:38.672 } 00:17:38.672 } 00:17:38.672 ]' 00:17:38.672 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.932 15:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.191 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:39.191 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.759 15:50:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.759 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.019 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.019 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.019 15:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.278 00:17:40.278 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.278 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.278 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.537 { 00:17:40.537 "cntlid": 139, 00:17:40.537 "qid": 0, 00:17:40.537 "state": "enabled", 00:17:40.537 "thread": "nvmf_tgt_poll_group_000", 00:17:40.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:40.537 "listen_address": { 00:17:40.537 "trtype": "TCP", 00:17:40.537 "adrfam": "IPv4", 00:17:40.537 "traddr": "10.0.0.2", 00:17:40.537 "trsvcid": "4420" 00:17:40.537 }, 00:17:40.537 "peer_address": { 00:17:40.537 "trtype": "TCP", 00:17:40.537 "adrfam": "IPv4", 00:17:40.537 "traddr": "10.0.0.1", 00:17:40.537 "trsvcid": "46744" 00:17:40.537 }, 00:17:40.537 "auth": { 00:17:40.537 "state": "completed", 00:17:40.537 "digest": "sha512", 00:17:40.537 "dhgroup": "ffdhe8192" 00:17:40.537 } 00:17:40.537 } 00:17:40.537 ]' 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.537 15:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.537 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.797 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.797 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.797 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.797 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:40.797 15:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: --dhchap-ctrl-secret DHHC-1:02:MjA5NDYwMzgxMGY3ZDNjM2I1MTgyMzg3YzZkODlkYTNlNjhmZGU2OWVkYTUwZDlk8m/7YA==: 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.365 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.624 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.625 15:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.193 00:17:42.193 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.193 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.193 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.452 15:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.452 { 00:17:42.452 "cntlid": 141, 00:17:42.452 "qid": 0, 00:17:42.452 "state": "enabled", 00:17:42.452 "thread": "nvmf_tgt_poll_group_000", 00:17:42.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:42.452 "listen_address": { 00:17:42.452 "trtype": "TCP", 00:17:42.452 "adrfam": "IPv4", 00:17:42.452 "traddr": "10.0.0.2", 00:17:42.452 "trsvcid": "4420" 00:17:42.452 }, 00:17:42.452 "peer_address": { 00:17:42.452 "trtype": "TCP", 00:17:42.452 "adrfam": "IPv4", 00:17:42.452 "traddr": "10.0.0.1", 00:17:42.452 "trsvcid": "46768" 00:17:42.452 }, 00:17:42.452 "auth": { 00:17:42.452 "state": "completed", 00:17:42.452 "digest": "sha512", 00:17:42.452 "dhgroup": "ffdhe8192" 00:17:42.452 } 00:17:42.452 } 00:17:42.452 ]' 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.452 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.711 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:42.711 15:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:01:MGE2MTY0Y2E5NWJiZmYxNTNkOTg1OTliMDgzNjIyMWLA/hLZ: 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.280 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.539 15:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.108 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.108 { 00:17:44.108 "cntlid": 143, 00:17:44.108 "qid": 0, 00:17:44.108 "state": "enabled", 00:17:44.108 "thread": "nvmf_tgt_poll_group_000", 00:17:44.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:44.108 "listen_address": { 00:17:44.108 "trtype": "TCP", 00:17:44.108 "adrfam": 
"IPv4", 00:17:44.108 "traddr": "10.0.0.2", 00:17:44.108 "trsvcid": "4420" 00:17:44.108 }, 00:17:44.108 "peer_address": { 00:17:44.108 "trtype": "TCP", 00:17:44.108 "adrfam": "IPv4", 00:17:44.108 "traddr": "10.0.0.1", 00:17:44.108 "trsvcid": "46804" 00:17:44.108 }, 00:17:44.108 "auth": { 00:17:44.108 "state": "completed", 00:17:44.108 "digest": "sha512", 00:17:44.108 "dhgroup": "ffdhe8192" 00:17:44.108 } 00:17:44.108 } 00:17:44.108 ]' 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.108 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.367 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.367 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.367 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.367 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.367 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.626 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:44.626 15:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=: 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.195 15:50:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.195 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.454 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.454 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.454 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.454 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.713 00:17:45.713 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.713 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.713 15:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.972 { 00:17:45.972 "cntlid": 145, 00:17:45.972 "qid": 0, 00:17:45.972 "state": "enabled", 00:17:45.972 "thread": "nvmf_tgt_poll_group_000", 00:17:45.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:45.972 "listen_address": { 00:17:45.972 "trtype": "TCP", 00:17:45.972 "adrfam": "IPv4", 00:17:45.972 "traddr": "10.0.0.2", 00:17:45.972 "trsvcid": "4420" 00:17:45.972 }, 00:17:45.972 "peer_address": { 00:17:45.972 "trtype": "TCP", 00:17:45.972 "adrfam": "IPv4", 00:17:45.972 "traddr": "10.0.0.1", 00:17:45.972 "trsvcid": "46844" 00:17:45.972 }, 00:17:45.972 "auth": { 00:17:45.972 "state": 
"completed", 00:17:45.972 "digest": "sha512", 00:17:45.972 "dhgroup": "ffdhe8192" 00:17:45.972 } 00:17:45.972 } 00:17:45.972 ]' 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.972 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:46.231 15:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OTIwMzM2MDNjNmMzMzA3ZWUxZjI1ZTQxOTNmMjYwNjU1ZjFhMTQ0NGRjYTBkZGI4PnfNxA==: --dhchap-ctrl-secret 
DHHC-1:03:YTRmYjgwNWYyOTU5YWY4NzhjZTNkZDQ5OGNhZTRhZmY2MmQ2ZjdiNzNiZjIxMGJjYjQxMDRlMzc2NzdmYzdmMqd/DZQ=: 00:17:46.800 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.059 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.319 request: 00:17:47.319 { 00:17:47.319 "name": "nvme0", 00:17:47.319 "trtype": "tcp", 00:17:47.319 "traddr": "10.0.0.2", 00:17:47.319 "adrfam": "ipv4", 00:17:47.319 "trsvcid": "4420", 00:17:47.319 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:47.319 "prchk_reftag": false, 00:17:47.319 "prchk_guard": false, 00:17:47.319 "hdgst": false, 00:17:47.319 "ddgst": false, 00:17:47.319 "dhchap_key": "key2", 00:17:47.319 "allow_unrecognized_csi": false, 00:17:47.319 "method": "bdev_nvme_attach_controller", 00:17:47.319 "req_id": 1 00:17:47.319 } 00:17:47.319 Got JSON-RPC error response 00:17:47.319 response: 00:17:47.319 { 00:17:47.319 "code": -5, 00:17:47.319 "message": 
"Input/output error" 00:17:47.319 } 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.319 15:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.319 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.889 request: 00:17:47.889 { 00:17:47.889 "name": "nvme0", 00:17:47.889 "trtype": "tcp", 00:17:47.889 "traddr": "10.0.0.2", 00:17:47.889 "adrfam": "ipv4", 00:17:47.889 "trsvcid": "4420", 00:17:47.889 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:47.889 "prchk_reftag": false, 00:17:47.889 "prchk_guard": false, 00:17:47.889 "hdgst": 
false, 00:17:47.889 "ddgst": false, 00:17:47.889 "dhchap_key": "key1", 00:17:47.889 "dhchap_ctrlr_key": "ckey2", 00:17:47.889 "allow_unrecognized_csi": false, 00:17:47.889 "method": "bdev_nvme_attach_controller", 00:17:47.889 "req_id": 1 00:17:47.889 } 00:17:47.889 Got JSON-RPC error response 00:17:47.889 response: 00:17:47.889 { 00:17:47.889 "code": -5, 00:17:47.889 "message": "Input/output error" 00:17:47.889 } 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.889 15:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:47.889 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:48.457 request:
00:17:48.457 {
00:17:48.457 "name": "nvme0",
00:17:48.457 "trtype": "tcp",
00:17:48.457 "traddr": "10.0.0.2",
00:17:48.457 "adrfam": "ipv4",
00:17:48.457 "trsvcid": "4420",
00:17:48.457 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:48.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:48.457 "prchk_reftag": false,
00:17:48.457 "prchk_guard": false,
00:17:48.457 "hdgst": false,
00:17:48.457 "ddgst": false,
00:17:48.457 "dhchap_key": "key1",
00:17:48.457 "dhchap_ctrlr_key": "ckey1",
00:17:48.457 "allow_unrecognized_csi": false,
00:17:48.457 "method": "bdev_nvme_attach_controller",
00:17:48.457 "req_id": 1
00:17:48.457 }
00:17:48.457 Got JSON-RPC error response
00:17:48.457 response:
00:17:48.457 {
00:17:48.457 "code": -5,
00:17:48.457 "message": "Input/output error"
00:17:48.457 }
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1978925 ']'
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1978925'
killing process with pid 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1978925
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2000390
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2000390
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2000390 ']'
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:48.457 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2000390
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2000390 ']'
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:48.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:48.717 15:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.976 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:48.976 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:48.976 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:17:48.976 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.976 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.976 null0
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YA2
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Vn6 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vn6
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Wt0
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.NE3 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.NE3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yxl
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.55M ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.55M
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lTR
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:49.236 15:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:49.804 nvme0n1
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:50.062 {
00:17:50.062 "cntlid": 1,
00:17:50.062 "qid": 0,
00:17:50.062 "state": "enabled",
00:17:50.062 "thread": "nvmf_tgt_poll_group_000",
00:17:50.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:50.062 "listen_address": {
00:17:50.062 "trtype": "TCP",
00:17:50.062 "adrfam": "IPv4",
00:17:50.062 "traddr": "10.0.0.2",
00:17:50.062 "trsvcid": "4420"
00:17:50.062 },
00:17:50.062 "peer_address": {
00:17:50.062 "trtype": "TCP",
00:17:50.062 "adrfam": "IPv4",
00:17:50.062 "traddr": "10.0.0.1",
00:17:50.062 "trsvcid": "48354"
00:17:50.062 },
00:17:50.062 "auth": {
00:17:50.062 "state": "completed",
00:17:50.062 "digest": "sha512",
00:17:50.062 "dhgroup": "ffdhe8192"
00:17:50.062 }
00:17:50.062 }
00:17:50.062 ]'
00:17:50.062 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:50.320 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:50.581 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=:
00:17:50.581 15:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=:
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:51.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:51.149 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.409 request:
00:17:51.409 {
00:17:51.409 "name": "nvme0",
00:17:51.409 "trtype": "tcp",
00:17:51.409 "traddr": "10.0.0.2",
00:17:51.409 "adrfam": "ipv4",
00:17:51.409 "trsvcid": "4420",
00:17:51.409 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:51.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:51.409 "prchk_reftag": false,
00:17:51.409 "prchk_guard": false,
00:17:51.409 "hdgst": false,
00:17:51.409 "ddgst": false,
00:17:51.409 "dhchap_key": "key3",
00:17:51.409 "allow_unrecognized_csi": false,
00:17:51.409 "method": "bdev_nvme_attach_controller",
00:17:51.409 "req_id": 1
00:17:51.409 }
00:17:51.409 Got JSON-RPC error response
00:17:51.409 response:
00:17:51.409 {
00:17:51.409 "code": -5,
00:17:51.409 "message": "Input/output error"
00:17:51.409 }
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:51.409 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:17:51.668 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:51.668 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:51.668 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.669 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.928 request:
00:17:51.928 {
00:17:51.928 "name": "nvme0",
00:17:51.928 "trtype": "tcp",
00:17:51.928 "traddr": "10.0.0.2",
00:17:51.928 "adrfam": "ipv4",
00:17:51.928 "trsvcid": "4420",
00:17:51.928 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:51.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:51.928 "prchk_reftag": false,
00:17:51.928 "prchk_guard": false,
00:17:51.928 "hdgst": false,
00:17:51.928 "ddgst": false,
00:17:51.928 "dhchap_key": "key3",
00:17:51.928 "allow_unrecognized_csi": false,
00:17:51.928 "method": "bdev_nvme_attach_controller",
00:17:51.928 "req_id": 1
00:17:51.928 }
00:17:51.928 Got JSON-RPC error response
00:17:51.928 response:
00:17:51.928 {
00:17:51.928 "code": -5,
00:17:51.928 "message": "Input/output error"
00:17:51.928 }
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:51.928 15:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:51.928 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:51.928 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:52.187 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:52.446 request:
00:17:52.446 {
00:17:52.446 "name": "nvme0",
00:17:52.446 "trtype": "tcp",
00:17:52.446 "traddr": "10.0.0.2",
00:17:52.446 "adrfam": "ipv4",
00:17:52.446 "trsvcid": "4420",
00:17:52.446 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:52.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562",
00:17:52.446 "prchk_reftag": false,
00:17:52.446 "prchk_guard": false,
00:17:52.446 "hdgst": false,
00:17:52.446 "ddgst": false,
00:17:52.446 "dhchap_key": "key0",
00:17:52.446 "dhchap_ctrlr_key": "key1",
00:17:52.446 "allow_unrecognized_csi": false,
00:17:52.446 "method": "bdev_nvme_attach_controller",
00:17:52.446 "req_id": 1
00:17:52.446 }
00:17:52.446 Got JSON-RPC error response
00:17:52.446 response:
00:17:52.446 {
00:17:52.446 "code": -5,
00:17:52.446 "message": "Input/output error"
00:17:52.446 }
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:52.446 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:52.706 nvme0n1
00:17:52.706 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:17:52.706 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:17:52.706 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:52.965 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:52.965 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:52.965 15:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:52.965 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1
00:17:52.965 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.965 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.225 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.225 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:53.225 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:53.225 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:53.793 nvme0n1
00:17:53.793 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:17:53.793 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:17:53.793 15:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:17:54.052 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:54.311 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:54.311 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=:
00:17:54.311 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: --dhchap-ctrl-secret DHHC-1:03:ZjE4ZWM3ZWE3ZTgzZWYwZmFkMmU3NmMwYWRiNWZmNGIzZGE4NjM4YzRhZjk2NzdhNTU0NWVjNTVmOWViYjJkYR/g/fA=:
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:54.879 15:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.879 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:55.448 request: 00:17:55.448 { 00:17:55.448 "name": "nvme0", 00:17:55.448 "trtype": "tcp", 00:17:55.448 "traddr": "10.0.0.2", 00:17:55.448 "adrfam": "ipv4", 00:17:55.448 "trsvcid": "4420", 00:17:55.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:55.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:17:55.448 "prchk_reftag": false, 00:17:55.448 "prchk_guard": false, 00:17:55.448 "hdgst": false, 00:17:55.448 "ddgst": false, 00:17:55.448 "dhchap_key": "key1", 00:17:55.448 "allow_unrecognized_csi": false, 00:17:55.448 "method": "bdev_nvme_attach_controller", 00:17:55.448 "req_id": 1 00:17:55.448 } 00:17:55.448 Got JSON-RPC error response 00:17:55.448 response: 00:17:55.448 { 00:17:55.448 "code": -5, 00:17:55.448 "message": "Input/output error" 00:17:55.448 } 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.448 15:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.388 nvme0n1 00:17:56.388 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:56.388 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:56.388 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.388 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.388 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.389 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.647 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.905 nvme0n1 00:17:56.905 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:56.905 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:56.905 15:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: '' 2s 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:57.164 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: ]] 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjUwMTE0NzU1ZTMzNWE2NzU0Zjc3M2RjNzk5MjA3NTHE3gNw: 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:57.423 15:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:59.420 
15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: 2s 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:59.420 15:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: ]] 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Mjg1MTEyOTAzYmVlYTUxZmE0ODIzNWZiNjA3Y2MyYjZkMGM4ODUwNTU3ZGY0NjY4uIqbrg==: 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:59.420 15:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.323 15:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.258 nvme0n1 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.258 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.823 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:02.824 15:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:03.082 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:03.082 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:03.082 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.340 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.341 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.600 request: 00:18:03.600 { 00:18:03.600 "name": "nvme0", 00:18:03.600 "dhchap_key": "key1", 00:18:03.600 "dhchap_ctrlr_key": "key3", 00:18:03.600 "method": "bdev_nvme_set_keys", 00:18:03.600 "req_id": 1 00:18:03.600 } 00:18:03.600 Got JSON-RPC error response 00:18:03.600 response: 00:18:03.600 { 00:18:03.600 "code": -13, 00:18:03.600 "message": "Permission denied" 00:18:03.600 } 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.600 15:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:03.858 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:03.858 15:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.235 15:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.802 nvme0n1 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.802 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.369 request: 00:18:06.369 { 00:18:06.369 "name": "nvme0", 00:18:06.369 "dhchap_key": "key2", 00:18:06.369 "dhchap_ctrlr_key": "key0", 00:18:06.369 "method": "bdev_nvme_set_keys", 00:18:06.369 "req_id": 1 00:18:06.369 } 00:18:06.369 Got JSON-RPC error response 00:18:06.369 response: 00:18:06.369 { 00:18:06.369 "code": -13, 00:18:06.369 "message": "Permission denied" 00:18:06.369 } 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.369 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.628 
15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:06.628 15:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:07.563 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.563 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.563 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1979049 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1979049 ']' 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1979049 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979049 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.822 15:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979049' 00:18:07.822 killing process with pid 1979049 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1979049 00:18:07.822 15:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1979049 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.081 rmmod nvme_tcp 00:18:08.081 rmmod nvme_fabrics 00:18:08.081 rmmod nvme_keyring 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2000390 ']' 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2000390 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2000390 ']' 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 2000390 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.081 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2000390 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2000390' 00:18:08.340 killing process with pid 2000390 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2000390 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2000390 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.340 15:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YA2 /tmp/spdk.key-sha256.Wt0 /tmp/spdk.key-sha384.yxl /tmp/spdk.key-sha512.lTR /tmp/spdk.key-sha512.Vn6 /tmp/spdk.key-sha384.NE3 /tmp/spdk.key-sha256.55M '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:10.875 00:18:10.875 real 2m32.134s 00:18:10.875 user 5m50.666s 00:18:10.875 sys 0m24.108s 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.875 ************************************ 00:18:10.875 END TEST nvmf_auth_target 00:18:10.875 ************************************ 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:10.875 15:51:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.875 ************************************ 00:18:10.875 START TEST nvmf_bdevio_no_huge 00:18:10.875 ************************************ 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.875 * Looking for test storage... 00:18:10.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.875 15:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:18:10.875 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.876 --rc genhtml_branch_coverage=1 00:18:10.876 --rc genhtml_function_coverage=1 00:18:10.876 --rc genhtml_legend=1 00:18:10.876 --rc geninfo_all_blocks=1 00:18:10.876 --rc geninfo_unexecuted_blocks=1 00:18:10.876 00:18:10.876 ' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.876 --rc genhtml_branch_coverage=1 00:18:10.876 --rc genhtml_function_coverage=1 00:18:10.876 --rc genhtml_legend=1 00:18:10.876 --rc geninfo_all_blocks=1 00:18:10.876 --rc geninfo_unexecuted_blocks=1 00:18:10.876 00:18:10.876 ' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:10.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.876 --rc genhtml_branch_coverage=1 00:18:10.876 --rc genhtml_function_coverage=1 00:18:10.876 --rc genhtml_legend=1 00:18:10.876 --rc geninfo_all_blocks=1 00:18:10.876 --rc geninfo_unexecuted_blocks=1 00:18:10.876 00:18:10.876 ' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:10.876 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.876 --rc genhtml_branch_coverage=1 00:18:10.876 --rc genhtml_function_coverage=1 00:18:10.876 --rc genhtml_legend=1 00:18:10.876 --rc geninfo_all_blocks=1 00:18:10.876 --rc geninfo_unexecuted_blocks=1 00:18:10.876 00:18:10.876 ' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.876 15:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.876 15:51:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:18:17.445 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.445 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:17.446 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:17.446 Found net devices under 0000:af:00.0: cvl_0_0 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.446 
15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:17.446 Found net devices under 0000:af:00.1: cvl_0_1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:17.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:18:17.446 00:18:17.446 --- 10.0.0.2 ping statistics --- 00:18:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.446 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:18:17.446 00:18:17.446 --- 10.0.0.1 ping statistics --- 00:18:17.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.446 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2007652 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2007652 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2007652 ']' 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.446 15:51:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.446 [2024-12-09 15:51:11.816514] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:17.446 [2024-12-09 15:51:11.816565] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:17.446 [2024-12-09 15:51:11.898967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.446 [2024-12-09 15:51:11.945511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.446 [2024-12-09 15:51:11.945544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.446 [2024-12-09 15:51:11.945551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.446 [2024-12-09 15:51:11.945557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.446 [2024-12-09 15:51:11.945562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.446 [2024-12-09 15:51:11.946633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.446 [2024-12-09 15:51:11.946743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:17.446 [2024-12-09 15:51:11.946849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.446 [2024-12-09 15:51:11.946851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:17.446 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.446 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:17.446 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.446 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 [2024-12-09 15:51:12.098124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.447 15:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 Malloc0 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.447 [2024-12-09 15:51:12.142418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.447 15:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:17.447 { 00:18:17.447 "params": { 00:18:17.447 "name": "Nvme$subsystem", 00:18:17.447 "trtype": "$TEST_TRANSPORT", 00:18:17.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.447 "adrfam": "ipv4", 00:18:17.447 "trsvcid": "$NVMF_PORT", 00:18:17.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.447 "hdgst": ${hdgst:-false}, 00:18:17.447 "ddgst": ${ddgst:-false} 00:18:17.447 }, 00:18:17.447 "method": "bdev_nvme_attach_controller" 00:18:17.447 } 00:18:17.447 EOF 00:18:17.447 )") 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:17.447 15:51:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:17.447 "params": { 00:18:17.447 "name": "Nvme1", 00:18:17.447 "trtype": "tcp", 00:18:17.447 "traddr": "10.0.0.2", 00:18:17.447 "adrfam": "ipv4", 00:18:17.447 "trsvcid": "4420", 00:18:17.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.447 "hdgst": false, 00:18:17.447 "ddgst": false 00:18:17.447 }, 00:18:17.447 "method": "bdev_nvme_attach_controller" 00:18:17.447 }' 00:18:17.447 [2024-12-09 15:51:12.193090] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:18:17.447 [2024-12-09 15:51:12.193134] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2007828 ] 00:18:17.447 [2024-12-09 15:51:12.271717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:17.447 [2024-12-09 15:51:12.319540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.447 [2024-12-09 15:51:12.319644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.447 [2024-12-09 15:51:12.319645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.447 I/O targets: 00:18:17.447 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:17.447 00:18:17.447 00:18:17.447 CUnit - A unit testing framework for C - Version 2.1-3 00:18:17.447 http://cunit.sourceforge.net/ 00:18:17.447 00:18:17.447 00:18:17.447 Suite: bdevio tests on: Nvme1n1 00:18:17.447 Test: blockdev write read block ...passed 00:18:17.447 Test: blockdev write zeroes read block ...passed 00:18:17.447 Test: blockdev write zeroes read no split ...passed 00:18:17.447 Test: blockdev write zeroes 
read split ...passed 00:18:17.447 Test: blockdev write zeroes read split partial ...passed 00:18:17.447 Test: blockdev reset ...[2024-12-09 15:51:12.644483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:17.447 [2024-12-09 15:51:12.644548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d8ef0 (9): Bad file descriptor 00:18:17.707 [2024-12-09 15:51:12.700378] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:17.707 passed 00:18:17.707 Test: blockdev write read 8 blocks ...passed 00:18:17.707 Test: blockdev write read size > 128k ...passed 00:18:17.707 Test: blockdev write read invalid size ...passed 00:18:17.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:17.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:17.707 Test: blockdev write read max offset ...passed 00:18:17.707 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:17.707 Test: blockdev writev readv 8 blocks ...passed 00:18:17.707 Test: blockdev writev readv 30 x 1block ...passed 00:18:17.707 Test: blockdev writev readv block ...passed 00:18:17.707 Test: blockdev writev readv size > 128k ...passed 00:18:17.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:17.707 Test: blockdev comparev and writev ...[2024-12-09 15:51:12.871935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.871962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.871975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 
15:51:12.871983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.707 [2024-12-09 15:51:12.872767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.707 [2024-12-09 15:51:12.872774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.707 passed 00:18:17.966 Test: blockdev nvme passthru rw ...passed 00:18:17.966 Test: blockdev nvme passthru vendor specific ...[2024-12-09 15:51:12.955607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.966 [2024-12-09 15:51:12.955625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.966 [2024-12-09 15:51:12.955730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.966 [2024-12-09 15:51:12.955743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.966 [2024-12-09 15:51:12.955838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.966 [2024-12-09 15:51:12.955847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.966 [2024-12-09 15:51:12.955947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.966 [2024-12-09 15:51:12.955956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.966 passed 00:18:17.966 Test: blockdev nvme admin passthru ...passed 00:18:17.966 Test: blockdev copy ...passed 00:18:17.966 00:18:17.966 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.966 suites 1 1 n/a 0 0 00:18:17.966 tests 23 23 23 0 0 00:18:17.966 asserts 152 152 152 0 n/a 00:18:17.966 00:18:17.966 Elapsed time = 1.000 seconds 
00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.225 rmmod nvme_tcp 00:18:18.225 rmmod nvme_fabrics 00:18:18.225 rmmod nvme_keyring 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2007652 ']' 00:18:18.225 15:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2007652 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2007652 ']' 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2007652 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007652 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2007652' 00:18:18.225 killing process with pid 2007652 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2007652 00:18:18.225 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2007652 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.485 15:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.485 15:51:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:21.021 00:18:21.021 real 0m10.122s 00:18:21.021 user 0m10.448s 00:18:21.021 sys 0m5.291s 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.021 ************************************ 00:18:21.021 END TEST nvmf_bdevio_no_huge 00:18:21.021 ************************************ 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.021 
************************************ 00:18:21.021 START TEST nvmf_tls 00:18:21.021 ************************************ 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:21.021 * Looking for test storage... 00:18:21.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:21.021 15:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:21.021 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:21.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.022 --rc genhtml_branch_coverage=1 00:18:21.022 --rc genhtml_function_coverage=1 00:18:21.022 --rc genhtml_legend=1 00:18:21.022 --rc geninfo_all_blocks=1 00:18:21.022 --rc geninfo_unexecuted_blocks=1 00:18:21.022 00:18:21.022 ' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:21.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.022 --rc genhtml_branch_coverage=1 00:18:21.022 --rc genhtml_function_coverage=1 00:18:21.022 --rc genhtml_legend=1 00:18:21.022 --rc geninfo_all_blocks=1 00:18:21.022 --rc geninfo_unexecuted_blocks=1 00:18:21.022 00:18:21.022 ' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:21.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.022 --rc genhtml_branch_coverage=1 00:18:21.022 --rc genhtml_function_coverage=1 00:18:21.022 --rc genhtml_legend=1 00:18:21.022 --rc geninfo_all_blocks=1 00:18:21.022 --rc geninfo_unexecuted_blocks=1 00:18:21.022 00:18:21.022 ' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:21.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.022 --rc genhtml_branch_coverage=1 00:18:21.022 --rc genhtml_function_coverage=1 00:18:21.022 --rc genhtml_legend=1 00:18:21.022 --rc geninfo_all_blocks=1 00:18:21.022 --rc geninfo_unexecuted_blocks=1 00:18:21.022 00:18:21.022 ' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.022 
15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:21.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:21.022 15:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.591 15:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:27.591 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:27.591 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.591 15:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:27.591 Found net devices under 0000:af:00.0: cvl_0_0 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.591 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:27.591 Found net devices under 0000:af:00.1: cvl_0_1 00:18:27.592 15:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:27.592 
15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:27.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:18:27.592 00:18:27.592 --- 10.0.0.2 ping statistics --- 00:18:27.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.592 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:18:27.592 00:18:27.592 --- 10.0.0.1 ping statistics --- 00:18:27.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.592 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.592 15:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2011555 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2011555 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2011555 ']' 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.592 [2024-12-09 15:51:22.051367] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:18:27.592 [2024-12-09 15:51:22.051410] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.592 [2024-12-09 15:51:22.130631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.592 [2024-12-09 15:51:22.169636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.592 [2024-12-09 15:51:22.169671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:27.592 [2024-12-09 15:51:22.169678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.592 [2024-12-09 15:51:22.169686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.592 [2024-12-09 15:51:22.169691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.592 [2024-12-09 15:51:22.170214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:27.592 true 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:27.592 
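The `sock_impl_set_options --tls-version 13` / `sock_impl_get_options | jq -r .tls_version` pair above is a set-then-verify round-trip; the xtrace renders the comparison with escaped glob characters (`[[ 13 != \1\3 ]]`), but un-escaped it is a plain string check. A self-contained sketch of that pattern (the `version` variable stands in for the jq output):

```shell
# Set-then-verify pattern from the trace: read a value back after setting
# it and fail loudly on mismatch. "version" stands in for the value that
# `sock_impl_get_options -i ssl | jq -r .tls_version` would return.
version=13
if [ "$version" != "13" ]; then
    echo "expected TLS version 13, got $version" >&2
    exit 1
fi
echo "tls_version ok: $version"
```

The same check is repeated later in the log for version 7 and for invalid values, where the RPC is expected to reject the setting.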
15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.592 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:27.851 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:27.851 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:27.852 15:51:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.110 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:28.369 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:28.369 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:28.369 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
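The next traces toggle kTLS with `--enable-ktls` / `--disable-ktls` and read the flag back each time. A hedged sketch of that enable/disable round-trip; `set_ktls` and `get_ktls` are illustrative stand-ins for the rpc.py set/get calls, not SPDK functions:

```shell
# Illustrative stand-ins for sock_impl_set_options / sock_impl_get_options:
ktls=false
set_ktls() { ktls=$1; }
get_ktls() { echo "$ktls"; }

# Round-trip both transitions, as the log does at tls.sh@104-114.
set_ktls true
if [ "$(get_ktls)" != "true" ]; then echo "enable failed" >&2; exit 1; fi
set_ktls false
if [ "$(get_ktls)" != "false" ]; then echo "disable failed" >&2; exit 1; fi
echo "ktls round-trip ok"
```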
00:18:28.628 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.628 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:28.886 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:28.886 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:28.886 15:51:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:28.886 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.886 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:29.145 15:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.UqKKm3MUqa 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.YGTVrdWGkn 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:29.145 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UqKKm3MUqa 00:18:29.404 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.YGTVrdWGkn 00:18:29.404 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:29.404 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:29.662 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.UqKKm3MUqa 00:18:29.662 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UqKKm3MUqa 00:18:29.662 15:51:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.920 [2024-12-09 15:51:24.996543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.920 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.178 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.178 [2024-12-09 15:51:25.341417] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.178 [2024-12-09 15:51:25.341626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.178 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.435 malloc0 00:18:30.435 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.694 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UqKKm3MUqa 00:18:30.694 15:51:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.952 15:51:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UqKKm3MUqa 00:18:43.152 Initializing NVMe Controllers 00:18:43.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:43.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:43.152 Initialization complete. Launching workers. 
00:18:43.152 ======================================================== 00:18:43.152 Latency(us) 00:18:43.152 Device Information : IOPS MiB/s Average min max 00:18:43.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16944.15 66.19 3777.19 825.27 4510.44 00:18:43.152 ======================================================== 00:18:43.152 Total : 16944.15 66.19 3777.19 825.27 4510.44 00:18:43.152 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqKKm3MUqa 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UqKKm3MUqa 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2013877 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2013877 /var/tmp/bdevperf.sock 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2013877 ']' 00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
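The bdevperf launch above uses `waitforlisten`, which blocks (with `max_retries=100`) until the daemon's UNIX-domain RPC socket appears. A minimal sketch of that polling idea; `wait_for_sock` and its parameters are illustrative, not the autotest helper itself:

```shell
# Poll for a UNIX-domain socket with a bounded retry count.
# Returns 0 once the socket exists, 1 after max_retries polls.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

The real helper additionally checks that the target PID is still alive between polls, so a crashed daemon fails fast instead of timing out.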
00:18:43.152 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.153 [2024-12-09 15:51:36.249287] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:18:43.153 [2024-12-09 15:51:36.249334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013877 ] 00:18:43.153 [2024-12-09 15:51:36.323704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.153 [2024-12-09 15:51:36.364117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqKKm3MUqa 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:43.153 [2024-12-09 15:51:36.807148] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.153 TLSTESTn1 00:18:43.153 15:51:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.153 Running I/O for 10 seconds... 00:18:44.087 5498.00 IOPS, 21.48 MiB/s [2024-12-09T14:51:40.249Z] 5502.50 IOPS, 21.49 MiB/s [2024-12-09T14:51:41.183Z] 5551.33 IOPS, 21.68 MiB/s [2024-12-09T14:51:42.118Z] 5531.25 IOPS, 21.61 MiB/s [2024-12-09T14:51:43.053Z] 5556.20 IOPS, 21.70 MiB/s [2024-12-09T14:51:43.988Z] 5569.50 IOPS, 21.76 MiB/s [2024-12-09T14:51:45.363Z] 5588.57 IOPS, 21.83 MiB/s [2024-12-09T14:51:46.296Z] 5597.00 IOPS, 21.86 MiB/s [2024-12-09T14:51:47.231Z] 5614.00 IOPS, 21.93 MiB/s [2024-12-09T14:51:47.231Z] 5618.20 IOPS, 21.95 MiB/s 00:18:52.003 Latency(us) 00:18:52.003 [2024-12-09T14:51:47.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.003 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:52.003 Verification LBA range: start 0x0 length 0x2000 00:18:52.003 TLSTESTn1 : 10.02 5620.17 21.95 0.00 0.00 22738.73 5773.41 22219.82 00:18:52.003 [2024-12-09T14:51:47.231Z] =================================================================================================================== 00:18:52.003 [2024-12-09T14:51:47.231Z] Total : 5620.17 21.95 0.00 0.00 22738.73 5773.41 22219.82 00:18:52.003 { 00:18:52.003 "results": [ 00:18:52.003 { 00:18:52.003 "job": "TLSTESTn1", 00:18:52.003 "core_mask": "0x4", 00:18:52.003 "workload": "verify", 00:18:52.003 "status": "finished", 00:18:52.003 "verify_range": { 00:18:52.003 "start": 0, 00:18:52.003 "length": 8192 00:18:52.003 }, 00:18:52.003 "queue_depth": 128, 00:18:52.003 "io_size": 4096, 00:18:52.003 "runtime": 10.019084, 00:18:52.003 "iops": 
5620.174459062326, 00:18:52.003 "mibps": 21.95380648071221, 00:18:52.003 "io_failed": 0, 00:18:52.003 "io_timeout": 0, 00:18:52.003 "avg_latency_us": 22738.73255264108, 00:18:52.003 "min_latency_us": 5773.409523809524, 00:18:52.003 "max_latency_us": 22219.82476190476 00:18:52.003 } 00:18:52.003 ], 00:18:52.003 "core_count": 1 00:18:52.003 } 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2013877 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2013877 ']' 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2013877 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013877 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013877' 00:18:52.003 killing process with pid 2013877 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2013877 00:18:52.003 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.003 00:18:52.003 Latency(us) 00:18:52.003 [2024-12-09T14:51:47.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.003 [2024-12-09T14:51:47.231Z] 
=================================================================================================================== 00:18:52.003 [2024-12-09T14:51:47.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.003 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2013877 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YGTVrdWGkn 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YGTVrdWGkn 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YGTVrdWGkn 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YGTVrdWGkn 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2015680 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2015680 /var/tmp/bdevperf.sock 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015680 ']' 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.261 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.262 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.262 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.262 [2024-12-09 15:51:47.289820] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:52.262 [2024-12-09 15:51:47.289870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015680 ] 00:18:52.262 [2024-12-09 15:51:47.359962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.262 [2024-12-09 15:51:47.400355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.262 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.262 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.262 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YGTVrdWGkn 00:18:52.520 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.778 [2024-12-09 15:51:47.843375] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.778 [2024-12-09 15:51:47.852266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:52.778 [2024-12-09 15:51:47.852641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252e700 (107): Transport endpoint is not connected 00:18:52.778 [2024-12-09 15:51:47.853634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252e700 (9): Bad file descriptor 00:18:52.778 
[2024-12-09 15:51:47.854636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:52.778 [2024-12-09 15:51:47.854646] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:52.778 [2024-12-09 15:51:47.854652] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:52.778 [2024-12-09 15:51:47.854662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:52.778 request: 00:18:52.778 { 00:18:52.778 "name": "TLSTEST", 00:18:52.778 "trtype": "tcp", 00:18:52.778 "traddr": "10.0.0.2", 00:18:52.778 "adrfam": "ipv4", 00:18:52.778 "trsvcid": "4420", 00:18:52.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.778 "prchk_reftag": false, 00:18:52.778 "prchk_guard": false, 00:18:52.778 "hdgst": false, 00:18:52.778 "ddgst": false, 00:18:52.778 "psk": "key0", 00:18:52.778 "allow_unrecognized_csi": false, 00:18:52.778 "method": "bdev_nvme_attach_controller", 00:18:52.778 "req_id": 1 00:18:52.778 } 00:18:52.778 Got JSON-RPC error response 00:18:52.778 response: 00:18:52.778 { 00:18:52.778 "code": -5, 00:18:52.778 "message": "Input/output error" 00:18:52.778 } 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2015680 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2015680 ']' 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2015680 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015680 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015680' 00:18:52.778 killing process with pid 2015680 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015680 00:18:52.778 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.778 00:18:52.778 Latency(us) 00:18:52.778 [2024-12-09T14:51:48.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.778 [2024-12-09T14:51:48.006Z] =================================================================================================================== 00:18:52.778 [2024-12-09T14:51:48.006Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.778 15:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015680 00:18:53.037 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:53.037 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:53.037 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.037 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.037 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UqKKm3MUqa 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UqKKm3MUqa 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UqKKm3MUqa 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UqKKm3MUqa 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2015702 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2015702 
/var/tmp/bdevperf.sock 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015702 ']' 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.038 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.038 [2024-12-09 15:51:48.134842] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:53.038 [2024-12-09 15:51:48.134891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015702 ] 00:18:53.038 [2024-12-09 15:51:48.205445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.038 [2024-12-09 15:51:48.241639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.296 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.296 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.296 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqKKm3MUqa 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:53.555 [2024-12-09 15:51:48.721562] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.555 [2024-12-09 15:51:48.726182] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:53.555 [2024-12-09 15:51:48.726202] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:53.555 [2024-12-09 15:51:48.726248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:53.555 [2024-12-09 15:51:48.726886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1305700 (107): Transport endpoint is not connected 00:18:53.555 [2024-12-09 15:51:48.727879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1305700 (9): Bad file descriptor 00:18:53.555 [2024-12-09 15:51:48.728880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:53.555 [2024-12-09 15:51:48.728889] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:53.555 [2024-12-09 15:51:48.728897] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:53.555 [2024-12-09 15:51:48.728907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:53.555 request: 00:18:53.555 { 00:18:53.555 "name": "TLSTEST", 00:18:53.555 "trtype": "tcp", 00:18:53.555 "traddr": "10.0.0.2", 00:18:53.555 "adrfam": "ipv4", 00:18:53.555 "trsvcid": "4420", 00:18:53.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.555 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:53.555 "prchk_reftag": false, 00:18:53.555 "prchk_guard": false, 00:18:53.555 "hdgst": false, 00:18:53.555 "ddgst": false, 00:18:53.555 "psk": "key0", 00:18:53.555 "allow_unrecognized_csi": false, 00:18:53.555 "method": "bdev_nvme_attach_controller", 00:18:53.555 "req_id": 1 00:18:53.555 } 00:18:53.555 Got JSON-RPC error response 00:18:53.555 response: 00:18:53.555 { 00:18:53.555 "code": -5, 00:18:53.555 "message": "Input/output error" 00:18:53.555 } 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2015702 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2015702 ']' 00:18:53.555 15:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2015702 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.555 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015702 00:18:53.813 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015702' 00:18:53.814 killing process with pid 2015702 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015702 00:18:53.814 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.814 00:18:53.814 Latency(us) 00:18:53.814 [2024-12-09T14:51:49.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.814 [2024-12-09T14:51:49.042Z] =================================================================================================================== 00:18:53.814 [2024-12-09T14:51:49.042Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015702 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.814 15:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqKKm3MUqa 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqKKm3MUqa 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UqKKm3MUqa 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UqKKm3MUqa 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2015931 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2015931 /var/tmp/bdevperf.sock 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015931 ']' 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.814 15:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.814 [2024-12-09 15:51:49.012477] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:53.814 [2024-12-09 15:51:49.012527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015931 ] 00:18:54.072 [2024-12-09 15:51:49.084454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.072 [2024-12-09 15:51:49.120428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.072 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.072 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.072 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UqKKm3MUqa 00:18:54.331 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.589 [2024-12-09 15:51:49.579446] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.589 [2024-12-09 15:51:49.586579] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.589 [2024-12-09 15:51:49.586600] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.589 [2024-12-09 15:51:49.586638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:54.589 [2024-12-09 15:51:49.586733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a79700 (107): Transport endpoint is not connected 00:18:54.589 [2024-12-09 15:51:49.587725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a79700 (9): Bad file descriptor 00:18:54.589 [2024-12-09 15:51:49.588727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:54.589 [2024-12-09 15:51:49.588737] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:54.589 [2024-12-09 15:51:49.588744] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:54.589 [2024-12-09 15:51:49.588754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:54.589 request: 00:18:54.589 { 00:18:54.589 "name": "TLSTEST", 00:18:54.589 "trtype": "tcp", 00:18:54.589 "traddr": "10.0.0.2", 00:18:54.589 "adrfam": "ipv4", 00:18:54.589 "trsvcid": "4420", 00:18:54.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:54.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.589 "prchk_reftag": false, 00:18:54.590 "prchk_guard": false, 00:18:54.590 "hdgst": false, 00:18:54.590 "ddgst": false, 00:18:54.590 "psk": "key0", 00:18:54.590 "allow_unrecognized_csi": false, 00:18:54.590 "method": "bdev_nvme_attach_controller", 00:18:54.590 "req_id": 1 00:18:54.590 } 00:18:54.590 Got JSON-RPC error response 00:18:54.590 response: 00:18:54.590 { 00:18:54.590 "code": -5, 00:18:54.590 "message": "Input/output error" 00:18:54.590 } 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2015931 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2015931 ']' 00:18:54.590 15:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2015931 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015931 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015931' 00:18:54.590 killing process with pid 2015931 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015931 00:18:54.590 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.590 00:18:54.590 Latency(us) 00:18:54.590 [2024-12-09T14:51:49.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.590 [2024-12-09T14:51:49.818Z] =================================================================================================================== 00:18:54.590 [2024-12-09T14:51:49.818Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015931 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.590 15:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.590 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2016075 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.848 15:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2016075 /var/tmp/bdevperf.sock 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2016075 ']' 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.848 15:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.848 [2024-12-09 15:51:49.866075] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:54.848 [2024-12-09 15:51:49.866123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016075 ] 00:18:54.848 [2024-12-09 15:51:49.940382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.848 [2024-12-09 15:51:49.978087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.106 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.106 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.106 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:55.106 [2024-12-09 15:51:50.250576] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:55.106 [2024-12-09 15:51:50.250610] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:55.106 request: 00:18:55.106 { 00:18:55.106 "name": "key0", 00:18:55.106 "path": "", 00:18:55.106 "method": "keyring_file_add_key", 00:18:55.106 "req_id": 1 00:18:55.106 } 00:18:55.106 Got JSON-RPC error response 00:18:55.106 response: 00:18:55.106 { 00:18:55.106 "code": -1, 00:18:55.106 "message": "Operation not permitted" 00:18:55.106 } 00:18:55.106 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.365 [2024-12-09 15:51:50.455182] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:55.365 [2024-12-09 15:51:50.455209] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:55.365 request: 00:18:55.365 { 00:18:55.365 "name": "TLSTEST", 00:18:55.365 "trtype": "tcp", 00:18:55.365 "traddr": "10.0.0.2", 00:18:55.365 "adrfam": "ipv4", 00:18:55.365 "trsvcid": "4420", 00:18:55.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.365 "prchk_reftag": false, 00:18:55.365 "prchk_guard": false, 00:18:55.365 "hdgst": false, 00:18:55.365 "ddgst": false, 00:18:55.365 "psk": "key0", 00:18:55.365 "allow_unrecognized_csi": false, 00:18:55.365 "method": "bdev_nvme_attach_controller", 00:18:55.365 "req_id": 1 00:18:55.365 } 00:18:55.365 Got JSON-RPC error response 00:18:55.365 response: 00:18:55.365 { 00:18:55.365 "code": -126, 00:18:55.365 "message": "Required key not available" 00:18:55.365 } 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2016075 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2016075 ']' 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2016075 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016075 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016075' 00:18:55.365 killing process with pid 2016075 
00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2016075 00:18:55.365 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.365 00:18:55.365 Latency(us) 00:18:55.365 [2024-12-09T14:51:50.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.365 [2024-12-09T14:51:50.593Z] =================================================================================================================== 00:18:55.365 [2024-12-09T14:51:50.593Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.365 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2016075 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2011555 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2011555 ']' 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2011555 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011555 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011555' 00:18:55.623 killing process with pid 2011555 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2011555 00:18:55.623 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2011555 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.HheesVvW1O 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:55.883 15:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.HheesVvW1O 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2016195 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2016195 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2016195 ']' 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.883 15:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.883 [2024-12-09 15:51:50.985875] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
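The `key_long` value generated above comes from `format_interchange_psk`, which pipes the prefix, key string, and digest id into an inline Python snippet. A sketch of what that snippet appears to compute, assuming the NVMe TP 8006 interchange framing (configured key bytes followed by their little-endian CRC32, base64-encoded); the helper name and exact layout are inferred from the log, not taken from SPDK source:

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    """Assumed reconstruction of the format_interchange_psk helper:
    the configured key string is suffixed with its little-endian CRC32,
    base64-encoded, and wrapped in 'NVMeTLSkey-1:<hash>:...:' framing."""
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")  # 4-byte CRC trailer
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"
```

With the key `00112233445566778899aabbccddeeff0011223344556677` and digest 2 this yields a string beginning `NVMeTLSkey-1:02:MDAxMTIy...`, matching the `key_long` logged above.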
00:18:55.883 [2024-12-09 15:51:50.985921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.883 [2024-12-09 15:51:51.065044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.883 [2024-12-09 15:51:51.103117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.883 [2024-12-09 15:51:51.103157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.883 [2024-12-09 15:51:51.103164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.883 [2024-12-09 15:51:51.103170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.883 [2024-12-09 15:51:51.103175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.883 [2024-12-09 15:51:51.103719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HheesVvW1O 00:18:56.141 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:56.400 [2024-12-09 15:51:51.420133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.400 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:56.400 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:56.658 [2024-12-09 15:51:51.785069] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.658 [2024-12-09 15:51:51.785270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:56.658 15:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:56.916 malloc0 00:18:56.916 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:57.174 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:18:57.175 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HheesVvW1O 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HheesVvW1O 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2016553 00:18:57.469 15:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2016553 /var/tmp/bdevperf.sock 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2016553 ']' 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.469 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.469 [2024-12-09 15:51:52.584159] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:18:57.469 [2024-12-09 15:51:52.584202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016553 ] 00:18:57.469 [2024-12-09 15:51:52.656801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.781 [2024-12-09 15:51:52.698463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.781 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.781 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:57.781 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:18:57.781 15:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.039 [2024-12-09 15:51:53.145649] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.039 TLSTESTn1 00:18:58.039 15:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:58.297 Running I/O for 10 seconds... 
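In the per-second samples and summary that follow, bdevperf's MiB/s column is derived directly from the IOPS figure and the 4096-byte I/O size configured with `-o 4096`; a quick sanity check of that arithmetic using this run's own summary numbers:

```python
def mib_per_s(iops: float, io_size_bytes: int) -> float:
    # MiB/s = IOPS * bytes-per-I/O / bytes-per-MiB
    return iops * io_size_bytes / (1024 * 1024)

# Figures from this run's results JSON: 5263.62 IOPS at 4 KiB per I/O,
# which corresponds to the reported ~20.56 MiB/s.
throughput = mib_per_s(5263.620281370674, 4096)
```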
00:19:00.166 5298.00 IOPS, 20.70 MiB/s [2024-12-09T14:51:56.769Z] 5335.00 IOPS, 20.84 MiB/s [2024-12-09T14:51:57.703Z] 5450.00 IOPS, 21.29 MiB/s [2024-12-09T14:51:58.637Z] 5481.75 IOPS, 21.41 MiB/s [2024-12-09T14:51:59.572Z] 5532.20 IOPS, 21.61 MiB/s [2024-12-09T14:52:00.506Z] 5471.50 IOPS, 21.37 MiB/s [2024-12-09T14:52:01.441Z] 5422.86 IOPS, 21.18 MiB/s [2024-12-09T14:52:02.375Z] 5365.88 IOPS, 20.96 MiB/s [2024-12-09T14:52:03.751Z] 5312.11 IOPS, 20.75 MiB/s [2024-12-09T14:52:03.751Z] 5259.70 IOPS, 20.55 MiB/s 00:19:08.523 Latency(us) 00:19:08.523 [2024-12-09T14:52:03.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.523 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.523 Verification LBA range: start 0x0 length 0x2000 00:19:08.523 TLSTESTn1 : 10.02 5263.62 20.56 0.00 0.00 24282.29 6303.94 31457.28 00:19:08.523 [2024-12-09T14:52:03.751Z] =================================================================================================================== 00:19:08.523 [2024-12-09T14:52:03.751Z] Total : 5263.62 20.56 0.00 0.00 24282.29 6303.94 31457.28 00:19:08.523 { 00:19:08.523 "results": [ 00:19:08.523 { 00:19:08.523 "job": "TLSTESTn1", 00:19:08.523 "core_mask": "0x4", 00:19:08.523 "workload": "verify", 00:19:08.523 "status": "finished", 00:19:08.523 "verify_range": { 00:19:08.523 "start": 0, 00:19:08.523 "length": 8192 00:19:08.523 }, 00:19:08.523 "queue_depth": 128, 00:19:08.523 "io_size": 4096, 00:19:08.523 "runtime": 10.01668, 00:19:08.523 "iops": 5263.620281370674, 00:19:08.523 "mibps": 20.561016724104196, 00:19:08.523 "io_failed": 0, 00:19:08.523 "io_timeout": 0, 00:19:08.523 "avg_latency_us": 24282.29359286997, 00:19:08.523 "min_latency_us": 6303.939047619047, 00:19:08.523 "max_latency_us": 31457.28 00:19:08.523 } 00:19:08.523 ], 00:19:08.523 "core_count": 1 00:19:08.523 } 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2016553 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2016553 ']' 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2016553 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016553 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016553' 00:19:08.523 killing process with pid 2016553 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2016553 00:19:08.523 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.523 00:19:08.523 Latency(us) 00:19:08.523 [2024-12-09T14:52:03.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.523 [2024-12-09T14:52:03.751Z] =================================================================================================================== 00:19:08.523 [2024-12-09T14:52:03.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2016553 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.HheesVvW1O 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HheesVvW1O 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HheesVvW1O 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HheesVvW1O 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.HheesVvW1O 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2018263 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2018263 /var/tmp/bdevperf.sock 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018263 ']' 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.523 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.523 [2024-12-09 15:52:03.658472] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
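The failure that follows is the point of the `chmod 0666` step above: the file-based keyring rejects key files that are group- or world-accessible, just as it earlier rejected the empty (non-absolute) path. A hypothetical reconstruction of the two checks `keyring_file_check_path` appears to perform, with the function name and error wording taken from the log (the actual SPDK C logic may differ):

```python
import os
import stat

def keyring_file_check_path(path: str) -> None:
    """Assumed key-file validation: the path must be absolute, and the
    file must grant no group/other permission bits (0600 passes, 0666
    is rejected, matching the errors logged in this test)."""
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"Invalid permissions for key file {path!r}: 0{mode:03o}")
```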
00:19:08.523 [2024-12-09 15:52:03.658519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018263 ] 00:19:08.523 [2024-12-09 15:52:03.731425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.782 [2024-12-09 15:52:03.772893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.782 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.782 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.782 15:52:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:09.040 [2024-12-09 15:52:04.036737] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HheesVvW1O': 0100666 00:19:09.040 [2024-12-09 15:52:04.036763] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:09.040 request: 00:19:09.040 { 00:19:09.040 "name": "key0", 00:19:09.040 "path": "/tmp/tmp.HheesVvW1O", 00:19:09.040 "method": "keyring_file_add_key", 00:19:09.040 "req_id": 1 00:19:09.040 } 00:19:09.040 Got JSON-RPC error response 00:19:09.040 response: 00:19:09.040 { 00:19:09.040 "code": -1, 00:19:09.040 "message": "Operation not permitted" 00:19:09.040 } 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.040 [2024-12-09 15:52:04.221290] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.040 [2024-12-09 15:52:04.221318] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:09.040 request: 00:19:09.040 { 00:19:09.040 "name": "TLSTEST", 00:19:09.040 "trtype": "tcp", 00:19:09.040 "traddr": "10.0.0.2", 00:19:09.040 "adrfam": "ipv4", 00:19:09.040 "trsvcid": "4420", 00:19:09.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.040 "prchk_reftag": false, 00:19:09.040 "prchk_guard": false, 00:19:09.040 "hdgst": false, 00:19:09.040 "ddgst": false, 00:19:09.040 "psk": "key0", 00:19:09.040 "allow_unrecognized_csi": false, 00:19:09.040 "method": "bdev_nvme_attach_controller", 00:19:09.040 "req_id": 1 00:19:09.040 } 00:19:09.040 Got JSON-RPC error response 00:19:09.040 response: 00:19:09.040 { 00:19:09.040 "code": -126, 00:19:09.040 "message": "Required key not available" 00:19:09.040 } 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2018263 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018263 ']' 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018263 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.040 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018263 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2018263' 00:19:09.298 killing process with pid 2018263 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018263 00:19:09.298 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.298 00:19:09.298 Latency(us) 00:19:09.298 [2024-12-09T14:52:04.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.298 [2024-12-09T14:52:04.526Z] =================================================================================================================== 00:19:09.298 [2024-12-09T14:52:04.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018263 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.298 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2016195 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2016195 ']' 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2016195 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016195 00:19:09.299 
15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016195' 00:19:09.299 killing process with pid 2016195 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2016195 00:19:09.299 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2016195 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2018499 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2018499 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018499 ']' 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:09.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.557 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.557 [2024-12-09 15:52:04.731210] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:09.557 [2024-12-09 15:52:04.731263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.815 [2024-12-09 15:52:04.811177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.815 [2024-12-09 15:52:04.849636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.815 [2024-12-09 15:52:04.849670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.815 [2024-12-09 15:52:04.849677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.815 [2024-12-09 15:52:04.849683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.815 [2024-12-09 15:52:04.849689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:09.815 [2024-12-09 15:52:04.850190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:19:09.815 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HheesVvW1O 00:19:09.816 15:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:10.073 [2024-12-09 15:52:05.156745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.073 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:10.330 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:10.330 [2024-12-09 15:52:05.517658] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.330 [2024-12-09 15:52:05.517847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.330 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:10.588 malloc0 00:19:10.588 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.846 15:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:10.846 [2024-12-09 15:52:06.067022] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HheesVvW1O': 0100666 00:19:10.846 [2024-12-09 15:52:06.067048] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:10.846 request: 00:19:10.846 { 00:19:10.846 "name": "key0", 00:19:10.846 "path": "/tmp/tmp.HheesVvW1O", 00:19:10.846 "method": "keyring_file_add_key", 00:19:10.846 "req_id": 1 
00:19:10.846 } 00:19:10.846 Got JSON-RPC error response 00:19:10.846 response: 00:19:10.846 { 00:19:10.846 "code": -1, 00:19:10.847 "message": "Operation not permitted" 00:19:10.847 } 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.104 [2024-12-09 15:52:06.251532] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:11.104 [2024-12-09 15:52:06.251577] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:11.104 request: 00:19:11.104 { 00:19:11.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.104 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.104 "psk": "key0", 00:19:11.104 "method": "nvmf_subsystem_add_host", 00:19:11.104 "req_id": 1 00:19:11.104 } 00:19:11.104 Got JSON-RPC error response 00:19:11.104 response: 00:19:11.104 { 00:19:11.104 "code": -32603, 00:19:11.104 "message": "Internal error" 00:19:11.104 } 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2018499 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018499 ']' 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018499 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:11.104 15:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018499 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018499' 00:19:11.104 killing process with pid 2018499 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018499 00:19:11.104 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018499 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.HheesVvW1O 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2018782 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2018782 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018782 ']' 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.362 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.362 [2024-12-09 15:52:06.544327] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:11.362 [2024-12-09 15:52:06.544374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.620 [2024-12-09 15:52:06.621678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.620 [2024-12-09 15:52:06.660024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.620 [2024-12-09 15:52:06.660061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.620 [2024-12-09 15:52:06.660068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.620 [2024-12-09 15:52:06.660074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.620 [2024-12-09 15:52:06.660079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.620 [2024-12-09 15:52:06.660616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HheesVvW1O 00:19:11.620 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.879 [2024-12-09 15:52:06.967542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.879 15:52:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:12.137 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:12.137 [2024-12-09 15:52:07.332465] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:12.137 [2024-12-09 15:52:07.332660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:12.137 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.394 malloc0 00:19:12.394 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.652 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:12.911 15:52:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2019140 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2019140 /var/tmp/bdevperf.sock 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2019140 ']' 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:12.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.911 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.911 [2024-12-09 15:52:08.124579] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:12.911 [2024-12-09 15:52:08.124627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019140 ] 00:19:13.169 [2024-12-09 15:52:08.198197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.169 [2024-12-09 15:52:08.238609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.169 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.169 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.169 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:13.428 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.686 [2024-12-09 15:52:08.690828] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.686 TLSTESTn1 00:19:13.686 15:52:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:13.945 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:13.945 "subsystems": [ 00:19:13.945 { 00:19:13.945 "subsystem": "keyring", 00:19:13.945 "config": [ 00:19:13.945 { 00:19:13.945 "method": "keyring_file_add_key", 00:19:13.945 "params": { 00:19:13.945 "name": "key0", 00:19:13.945 "path": "/tmp/tmp.HheesVvW1O" 00:19:13.945 } 00:19:13.945 } 00:19:13.945 ] 00:19:13.945 }, 00:19:13.945 { 00:19:13.945 "subsystem": "iobuf", 00:19:13.945 "config": [ 00:19:13.945 { 00:19:13.945 "method": "iobuf_set_options", 00:19:13.945 "params": { 00:19:13.945 "small_pool_count": 8192, 00:19:13.945 "large_pool_count": 1024, 00:19:13.945 "small_bufsize": 8192, 00:19:13.945 "large_bufsize": 135168, 00:19:13.945 "enable_numa": false 00:19:13.945 } 00:19:13.945 } 00:19:13.945 ] 00:19:13.945 }, 00:19:13.945 { 00:19:13.945 "subsystem": "sock", 00:19:13.945 "config": [ 00:19:13.945 { 00:19:13.945 "method": "sock_set_default_impl", 00:19:13.945 "params": { 00:19:13.945 "impl_name": "posix" 00:19:13.945 } 00:19:13.945 }, 00:19:13.945 { 00:19:13.945 "method": "sock_impl_set_options", 00:19:13.945 "params": { 00:19:13.945 "impl_name": "ssl", 00:19:13.945 "recv_buf_size": 4096, 00:19:13.945 "send_buf_size": 4096, 00:19:13.945 "enable_recv_pipe": true, 00:19:13.945 "enable_quickack": false, 00:19:13.945 "enable_placement_id": 0, 00:19:13.945 "enable_zerocopy_send_server": true, 00:19:13.945 "enable_zerocopy_send_client": false, 00:19:13.945 "zerocopy_threshold": 0, 00:19:13.945 "tls_version": 0, 00:19:13.945 "enable_ktls": false 00:19:13.945 } 00:19:13.945 }, 00:19:13.945 { 00:19:13.945 "method": "sock_impl_set_options", 00:19:13.945 "params": { 00:19:13.945 "impl_name": "posix", 00:19:13.945 "recv_buf_size": 2097152, 00:19:13.945 "send_buf_size": 2097152, 00:19:13.946 "enable_recv_pipe": true, 00:19:13.946 "enable_quickack": false, 00:19:13.946 "enable_placement_id": 0, 
00:19:13.946 "enable_zerocopy_send_server": true, 00:19:13.946 "enable_zerocopy_send_client": false, 00:19:13.946 "zerocopy_threshold": 0, 00:19:13.946 "tls_version": 0, 00:19:13.946 "enable_ktls": false 00:19:13.946 } 00:19:13.946 } 00:19:13.946 ] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "vmd", 00:19:13.946 "config": [] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "accel", 00:19:13.946 "config": [ 00:19:13.946 { 00:19:13.946 "method": "accel_set_options", 00:19:13.946 "params": { 00:19:13.946 "small_cache_size": 128, 00:19:13.946 "large_cache_size": 16, 00:19:13.946 "task_count": 2048, 00:19:13.946 "sequence_count": 2048, 00:19:13.946 "buf_count": 2048 00:19:13.946 } 00:19:13.946 } 00:19:13.946 ] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "bdev", 00:19:13.946 "config": [ 00:19:13.946 { 00:19:13.946 "method": "bdev_set_options", 00:19:13.946 "params": { 00:19:13.946 "bdev_io_pool_size": 65535, 00:19:13.946 "bdev_io_cache_size": 256, 00:19:13.946 "bdev_auto_examine": true, 00:19:13.946 "iobuf_small_cache_size": 128, 00:19:13.946 "iobuf_large_cache_size": 16 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_raid_set_options", 00:19:13.946 "params": { 00:19:13.946 "process_window_size_kb": 1024, 00:19:13.946 "process_max_bandwidth_mb_sec": 0 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_iscsi_set_options", 00:19:13.946 "params": { 00:19:13.946 "timeout_sec": 30 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_nvme_set_options", 00:19:13.946 "params": { 00:19:13.946 "action_on_timeout": "none", 00:19:13.946 "timeout_us": 0, 00:19:13.946 "timeout_admin_us": 0, 00:19:13.946 "keep_alive_timeout_ms": 10000, 00:19:13.946 "arbitration_burst": 0, 00:19:13.946 "low_priority_weight": 0, 00:19:13.946 "medium_priority_weight": 0, 00:19:13.946 "high_priority_weight": 0, 00:19:13.946 "nvme_adminq_poll_period_us": 10000, 00:19:13.946 "nvme_ioq_poll_period_us": 0, 
00:19:13.946 "io_queue_requests": 0, 00:19:13.946 "delay_cmd_submit": true, 00:19:13.946 "transport_retry_count": 4, 00:19:13.946 "bdev_retry_count": 3, 00:19:13.946 "transport_ack_timeout": 0, 00:19:13.946 "ctrlr_loss_timeout_sec": 0, 00:19:13.946 "reconnect_delay_sec": 0, 00:19:13.946 "fast_io_fail_timeout_sec": 0, 00:19:13.946 "disable_auto_failback": false, 00:19:13.946 "generate_uuids": false, 00:19:13.946 "transport_tos": 0, 00:19:13.946 "nvme_error_stat": false, 00:19:13.946 "rdma_srq_size": 0, 00:19:13.946 "io_path_stat": false, 00:19:13.946 "allow_accel_sequence": false, 00:19:13.946 "rdma_max_cq_size": 0, 00:19:13.946 "rdma_cm_event_timeout_ms": 0, 00:19:13.946 "dhchap_digests": [ 00:19:13.946 "sha256", 00:19:13.946 "sha384", 00:19:13.946 "sha512" 00:19:13.946 ], 00:19:13.946 "dhchap_dhgroups": [ 00:19:13.946 "null", 00:19:13.946 "ffdhe2048", 00:19:13.946 "ffdhe3072", 00:19:13.946 "ffdhe4096", 00:19:13.946 "ffdhe6144", 00:19:13.946 "ffdhe8192" 00:19:13.946 ] 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_nvme_set_hotplug", 00:19:13.946 "params": { 00:19:13.946 "period_us": 100000, 00:19:13.946 "enable": false 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_malloc_create", 00:19:13.946 "params": { 00:19:13.946 "name": "malloc0", 00:19:13.946 "num_blocks": 8192, 00:19:13.946 "block_size": 4096, 00:19:13.946 "physical_block_size": 4096, 00:19:13.946 "uuid": "6cae84a4-53e1-42ec-8475-d1ac843273c2", 00:19:13.946 "optimal_io_boundary": 0, 00:19:13.946 "md_size": 0, 00:19:13.946 "dif_type": 0, 00:19:13.946 "dif_is_head_of_md": false, 00:19:13.946 "dif_pi_format": 0 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "bdev_wait_for_examine" 00:19:13.946 } 00:19:13.946 ] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "nbd", 00:19:13.946 "config": [] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "scheduler", 00:19:13.946 "config": [ 00:19:13.946 { 00:19:13.946 "method": 
"framework_set_scheduler", 00:19:13.946 "params": { 00:19:13.946 "name": "static" 00:19:13.946 } 00:19:13.946 } 00:19:13.946 ] 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "subsystem": "nvmf", 00:19:13.946 "config": [ 00:19:13.946 { 00:19:13.946 "method": "nvmf_set_config", 00:19:13.946 "params": { 00:19:13.946 "discovery_filter": "match_any", 00:19:13.946 "admin_cmd_passthru": { 00:19:13.946 "identify_ctrlr": false 00:19:13.946 }, 00:19:13.946 "dhchap_digests": [ 00:19:13.946 "sha256", 00:19:13.946 "sha384", 00:19:13.946 "sha512" 00:19:13.946 ], 00:19:13.946 "dhchap_dhgroups": [ 00:19:13.946 "null", 00:19:13.946 "ffdhe2048", 00:19:13.946 "ffdhe3072", 00:19:13.946 "ffdhe4096", 00:19:13.946 "ffdhe6144", 00:19:13.946 "ffdhe8192" 00:19:13.946 ] 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_set_max_subsystems", 00:19:13.946 "params": { 00:19:13.946 "max_subsystems": 1024 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_set_crdt", 00:19:13.946 "params": { 00:19:13.946 "crdt1": 0, 00:19:13.946 "crdt2": 0, 00:19:13.946 "crdt3": 0 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_create_transport", 00:19:13.946 "params": { 00:19:13.946 "trtype": "TCP", 00:19:13.946 "max_queue_depth": 128, 00:19:13.946 "max_io_qpairs_per_ctrlr": 127, 00:19:13.946 "in_capsule_data_size": 4096, 00:19:13.946 "max_io_size": 131072, 00:19:13.946 "io_unit_size": 131072, 00:19:13.946 "max_aq_depth": 128, 00:19:13.946 "num_shared_buffers": 511, 00:19:13.946 "buf_cache_size": 4294967295, 00:19:13.946 "dif_insert_or_strip": false, 00:19:13.946 "zcopy": false, 00:19:13.946 "c2h_success": false, 00:19:13.946 "sock_priority": 0, 00:19:13.946 "abort_timeout_sec": 1, 00:19:13.946 "ack_timeout": 0, 00:19:13.946 "data_wr_pool_size": 0 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_create_subsystem", 00:19:13.946 "params": { 00:19:13.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.946 
"allow_any_host": false, 00:19:13.946 "serial_number": "SPDK00000000000001", 00:19:13.946 "model_number": "SPDK bdev Controller", 00:19:13.946 "max_namespaces": 10, 00:19:13.946 "min_cntlid": 1, 00:19:13.946 "max_cntlid": 65519, 00:19:13.946 "ana_reporting": false 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_subsystem_add_host", 00:19:13.946 "params": { 00:19:13.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.946 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.946 "psk": "key0" 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_subsystem_add_ns", 00:19:13.946 "params": { 00:19:13.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.946 "namespace": { 00:19:13.946 "nsid": 1, 00:19:13.946 "bdev_name": "malloc0", 00:19:13.946 "nguid": "6CAE84A453E142EC8475D1AC843273C2", 00:19:13.946 "uuid": "6cae84a4-53e1-42ec-8475-d1ac843273c2", 00:19:13.946 "no_auto_visible": false 00:19:13.946 } 00:19:13.946 } 00:19:13.946 }, 00:19:13.946 { 00:19:13.946 "method": "nvmf_subsystem_add_listener", 00:19:13.946 "params": { 00:19:13.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.946 "listen_address": { 00:19:13.946 "trtype": "TCP", 00:19:13.946 "adrfam": "IPv4", 00:19:13.946 "traddr": "10.0.0.2", 00:19:13.946 "trsvcid": "4420" 00:19:13.946 }, 00:19:13.946 "secure_channel": true 00:19:13.946 } 00:19:13.946 } 00:19:13.946 ] 00:19:13.947 } 00:19:13.947 ] 00:19:13.947 }' 00:19:13.947 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:14.206 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:14.206 "subsystems": [ 00:19:14.206 { 00:19:14.206 "subsystem": "keyring", 00:19:14.206 "config": [ 00:19:14.206 { 00:19:14.206 "method": "keyring_file_add_key", 00:19:14.206 "params": { 00:19:14.206 "name": "key0", 00:19:14.206 "path": "/tmp/tmp.HheesVvW1O" 00:19:14.206 } 
00:19:14.206 } 00:19:14.206 ] 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "subsystem": "iobuf", 00:19:14.206 "config": [ 00:19:14.206 { 00:19:14.206 "method": "iobuf_set_options", 00:19:14.206 "params": { 00:19:14.206 "small_pool_count": 8192, 00:19:14.206 "large_pool_count": 1024, 00:19:14.206 "small_bufsize": 8192, 00:19:14.206 "large_bufsize": 135168, 00:19:14.206 "enable_numa": false 00:19:14.206 } 00:19:14.206 } 00:19:14.206 ] 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "subsystem": "sock", 00:19:14.206 "config": [ 00:19:14.206 { 00:19:14.206 "method": "sock_set_default_impl", 00:19:14.206 "params": { 00:19:14.206 "impl_name": "posix" 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "sock_impl_set_options", 00:19:14.206 "params": { 00:19:14.206 "impl_name": "ssl", 00:19:14.206 "recv_buf_size": 4096, 00:19:14.206 "send_buf_size": 4096, 00:19:14.206 "enable_recv_pipe": true, 00:19:14.206 "enable_quickack": false, 00:19:14.206 "enable_placement_id": 0, 00:19:14.206 "enable_zerocopy_send_server": true, 00:19:14.206 "enable_zerocopy_send_client": false, 00:19:14.206 "zerocopy_threshold": 0, 00:19:14.206 "tls_version": 0, 00:19:14.206 "enable_ktls": false 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "sock_impl_set_options", 00:19:14.206 "params": { 00:19:14.206 "impl_name": "posix", 00:19:14.206 "recv_buf_size": 2097152, 00:19:14.206 "send_buf_size": 2097152, 00:19:14.206 "enable_recv_pipe": true, 00:19:14.206 "enable_quickack": false, 00:19:14.206 "enable_placement_id": 0, 00:19:14.206 "enable_zerocopy_send_server": true, 00:19:14.206 "enable_zerocopy_send_client": false, 00:19:14.206 "zerocopy_threshold": 0, 00:19:14.206 "tls_version": 0, 00:19:14.206 "enable_ktls": false 00:19:14.206 } 00:19:14.206 } 00:19:14.206 ] 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "subsystem": "vmd", 00:19:14.206 "config": [] 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "subsystem": "accel", 00:19:14.206 "config": [ 00:19:14.206 { 00:19:14.206 
"method": "accel_set_options", 00:19:14.206 "params": { 00:19:14.206 "small_cache_size": 128, 00:19:14.206 "large_cache_size": 16, 00:19:14.206 "task_count": 2048, 00:19:14.206 "sequence_count": 2048, 00:19:14.206 "buf_count": 2048 00:19:14.206 } 00:19:14.206 } 00:19:14.206 ] 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "subsystem": "bdev", 00:19:14.206 "config": [ 00:19:14.206 { 00:19:14.206 "method": "bdev_set_options", 00:19:14.206 "params": { 00:19:14.206 "bdev_io_pool_size": 65535, 00:19:14.206 "bdev_io_cache_size": 256, 00:19:14.206 "bdev_auto_examine": true, 00:19:14.206 "iobuf_small_cache_size": 128, 00:19:14.206 "iobuf_large_cache_size": 16 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "bdev_raid_set_options", 00:19:14.206 "params": { 00:19:14.206 "process_window_size_kb": 1024, 00:19:14.206 "process_max_bandwidth_mb_sec": 0 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "bdev_iscsi_set_options", 00:19:14.206 "params": { 00:19:14.206 "timeout_sec": 30 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "bdev_nvme_set_options", 00:19:14.206 "params": { 00:19:14.206 "action_on_timeout": "none", 00:19:14.206 "timeout_us": 0, 00:19:14.206 "timeout_admin_us": 0, 00:19:14.206 "keep_alive_timeout_ms": 10000, 00:19:14.206 "arbitration_burst": 0, 00:19:14.206 "low_priority_weight": 0, 00:19:14.206 "medium_priority_weight": 0, 00:19:14.206 "high_priority_weight": 0, 00:19:14.206 "nvme_adminq_poll_period_us": 10000, 00:19:14.206 "nvme_ioq_poll_period_us": 0, 00:19:14.206 "io_queue_requests": 512, 00:19:14.206 "delay_cmd_submit": true, 00:19:14.206 "transport_retry_count": 4, 00:19:14.206 "bdev_retry_count": 3, 00:19:14.206 "transport_ack_timeout": 0, 00:19:14.206 "ctrlr_loss_timeout_sec": 0, 00:19:14.206 "reconnect_delay_sec": 0, 00:19:14.206 "fast_io_fail_timeout_sec": 0, 00:19:14.206 "disable_auto_failback": false, 00:19:14.206 "generate_uuids": false, 00:19:14.206 "transport_tos": 0, 00:19:14.206 
"nvme_error_stat": false, 00:19:14.206 "rdma_srq_size": 0, 00:19:14.206 "io_path_stat": false, 00:19:14.206 "allow_accel_sequence": false, 00:19:14.206 "rdma_max_cq_size": 0, 00:19:14.206 "rdma_cm_event_timeout_ms": 0, 00:19:14.206 "dhchap_digests": [ 00:19:14.206 "sha256", 00:19:14.206 "sha384", 00:19:14.206 "sha512" 00:19:14.206 ], 00:19:14.206 "dhchap_dhgroups": [ 00:19:14.206 "null", 00:19:14.206 "ffdhe2048", 00:19:14.206 "ffdhe3072", 00:19:14.206 "ffdhe4096", 00:19:14.206 "ffdhe6144", 00:19:14.206 "ffdhe8192" 00:19:14.206 ] 00:19:14.206 } 00:19:14.206 }, 00:19:14.206 { 00:19:14.206 "method": "bdev_nvme_attach_controller", 00:19:14.206 "params": { 00:19:14.206 "name": "TLSTEST", 00:19:14.206 "trtype": "TCP", 00:19:14.206 "adrfam": "IPv4", 00:19:14.206 "traddr": "10.0.0.2", 00:19:14.206 "trsvcid": "4420", 00:19:14.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.206 "prchk_reftag": false, 00:19:14.206 "prchk_guard": false, 00:19:14.206 "ctrlr_loss_timeout_sec": 0, 00:19:14.206 "reconnect_delay_sec": 0, 00:19:14.206 "fast_io_fail_timeout_sec": 0, 00:19:14.207 "psk": "key0", 00:19:14.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.207 "hdgst": false, 00:19:14.207 "ddgst": false, 00:19:14.207 "multipath": "multipath" 00:19:14.207 } 00:19:14.207 }, 00:19:14.207 { 00:19:14.207 "method": "bdev_nvme_set_hotplug", 00:19:14.207 "params": { 00:19:14.207 "period_us": 100000, 00:19:14.207 "enable": false 00:19:14.207 } 00:19:14.207 }, 00:19:14.207 { 00:19:14.207 "method": "bdev_wait_for_examine" 00:19:14.207 } 00:19:14.207 ] 00:19:14.207 }, 00:19:14.207 { 00:19:14.207 "subsystem": "nbd", 00:19:14.207 "config": [] 00:19:14.207 } 00:19:14.207 ] 00:19:14.207 }' 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2019140 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2019140 ']' 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2019140 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2019140 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2019140' 00:19:14.207 killing process with pid 2019140 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2019140 00:19:14.207 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.207 00:19:14.207 Latency(us) 00:19:14.207 [2024-12-09T14:52:09.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.207 [2024-12-09T14:52:09.435Z] =================================================================================================================== 00:19:14.207 [2024-12-09T14:52:09.435Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.207 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2019140 00:19:14.465 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2018782 00:19:14.465 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018782 ']' 00:19:14.465 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018782 00:19:14.465 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.465 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018782 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018782' 00:19:14.466 killing process with pid 2018782 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018782 00:19:14.466 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018782 00:19:14.725 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:14.725 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.725 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.725 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:14.725 "subsystems": [ 00:19:14.725 { 00:19:14.725 "subsystem": "keyring", 00:19:14.725 "config": [ 00:19:14.725 { 00:19:14.725 "method": "keyring_file_add_key", 00:19:14.725 "params": { 00:19:14.725 "name": "key0", 00:19:14.725 "path": "/tmp/tmp.HheesVvW1O" 00:19:14.725 } 00:19:14.725 } 00:19:14.725 ] 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "subsystem": "iobuf", 00:19:14.725 "config": [ 00:19:14.725 { 00:19:14.725 "method": "iobuf_set_options", 00:19:14.725 "params": { 00:19:14.725 "small_pool_count": 8192, 00:19:14.725 "large_pool_count": 1024, 00:19:14.725 "small_bufsize": 8192, 00:19:14.725 "large_bufsize": 135168, 00:19:14.725 "enable_numa": false 00:19:14.725 } 00:19:14.725 } 00:19:14.725 ] 00:19:14.725 }, 
00:19:14.725 { 00:19:14.725 "subsystem": "sock", 00:19:14.725 "config": [ 00:19:14.725 { 00:19:14.725 "method": "sock_set_default_impl", 00:19:14.725 "params": { 00:19:14.725 "impl_name": "posix" 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "sock_impl_set_options", 00:19:14.725 "params": { 00:19:14.725 "impl_name": "ssl", 00:19:14.725 "recv_buf_size": 4096, 00:19:14.725 "send_buf_size": 4096, 00:19:14.725 "enable_recv_pipe": true, 00:19:14.725 "enable_quickack": false, 00:19:14.725 "enable_placement_id": 0, 00:19:14.725 "enable_zerocopy_send_server": true, 00:19:14.725 "enable_zerocopy_send_client": false, 00:19:14.725 "zerocopy_threshold": 0, 00:19:14.725 "tls_version": 0, 00:19:14.725 "enable_ktls": false 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "sock_impl_set_options", 00:19:14.725 "params": { 00:19:14.725 "impl_name": "posix", 00:19:14.725 "recv_buf_size": 2097152, 00:19:14.725 "send_buf_size": 2097152, 00:19:14.725 "enable_recv_pipe": true, 00:19:14.725 "enable_quickack": false, 00:19:14.725 "enable_placement_id": 0, 00:19:14.725 "enable_zerocopy_send_server": true, 00:19:14.725 "enable_zerocopy_send_client": false, 00:19:14.725 "zerocopy_threshold": 0, 00:19:14.725 "tls_version": 0, 00:19:14.725 "enable_ktls": false 00:19:14.725 } 00:19:14.725 } 00:19:14.725 ] 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "subsystem": "vmd", 00:19:14.725 "config": [] 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "subsystem": "accel", 00:19:14.725 "config": [ 00:19:14.725 { 00:19:14.725 "method": "accel_set_options", 00:19:14.725 "params": { 00:19:14.725 "small_cache_size": 128, 00:19:14.725 "large_cache_size": 16, 00:19:14.725 "task_count": 2048, 00:19:14.725 "sequence_count": 2048, 00:19:14.725 "buf_count": 2048 00:19:14.725 } 00:19:14.725 } 00:19:14.725 ] 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "subsystem": "bdev", 00:19:14.725 "config": [ 00:19:14.725 { 00:19:14.725 "method": "bdev_set_options", 00:19:14.725 "params": { 
00:19:14.725 "bdev_io_pool_size": 65535, 00:19:14.725 "bdev_io_cache_size": 256, 00:19:14.725 "bdev_auto_examine": true, 00:19:14.725 "iobuf_small_cache_size": 128, 00:19:14.725 "iobuf_large_cache_size": 16 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "bdev_raid_set_options", 00:19:14.725 "params": { 00:19:14.725 "process_window_size_kb": 1024, 00:19:14.725 "process_max_bandwidth_mb_sec": 0 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "bdev_iscsi_set_options", 00:19:14.725 "params": { 00:19:14.725 "timeout_sec": 30 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "bdev_nvme_set_options", 00:19:14.725 "params": { 00:19:14.725 "action_on_timeout": "none", 00:19:14.725 "timeout_us": 0, 00:19:14.725 "timeout_admin_us": 0, 00:19:14.725 "keep_alive_timeout_ms": 10000, 00:19:14.725 "arbitration_burst": 0, 00:19:14.725 "low_priority_weight": 0, 00:19:14.725 "medium_priority_weight": 0, 00:19:14.725 "high_priority_weight": 0, 00:19:14.725 "nvme_adminq_poll_period_us": 10000, 00:19:14.725 "nvme_ioq_poll_period_us": 0, 00:19:14.725 "io_queue_requests": 0, 00:19:14.725 "delay_cmd_submit": true, 00:19:14.725 "transport_retry_count": 4, 00:19:14.725 "bdev_retry_count": 3, 00:19:14.725 "transport_ack_timeout": 0, 00:19:14.725 "ctrlr_loss_timeout_sec": 0, 00:19:14.725 "reconnect_delay_sec": 0, 00:19:14.725 "fast_io_fail_timeout_sec": 0, 00:19:14.725 "disable_auto_failback": false, 00:19:14.725 "generate_uuids": false, 00:19:14.725 "transport_tos": 0, 00:19:14.725 "nvme_error_stat": false, 00:19:14.725 "rdma_srq_size": 0, 00:19:14.725 "io_path_stat": false, 00:19:14.725 "allow_accel_sequence": false, 00:19:14.725 "rdma_max_cq_size": 0, 00:19:14.725 "rdma_cm_event_timeout_ms": 0, 00:19:14.725 "dhchap_digests": [ 00:19:14.725 "sha256", 00:19:14.725 "sha384", 00:19:14.725 "sha512" 00:19:14.725 ], 00:19:14.725 "dhchap_dhgroups": [ 00:19:14.725 "null", 00:19:14.725 "ffdhe2048", 00:19:14.725 "ffdhe3072", 00:19:14.725 
"ffdhe4096", 00:19:14.725 "ffdhe6144", 00:19:14.725 "ffdhe8192" 00:19:14.725 ] 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "bdev_nvme_set_hotplug", 00:19:14.725 "params": { 00:19:14.725 "period_us": 100000, 00:19:14.725 "enable": false 00:19:14.725 } 00:19:14.725 }, 00:19:14.725 { 00:19:14.725 "method": "bdev_malloc_create", 00:19:14.725 "params": { 00:19:14.725 "name": "malloc0", 00:19:14.725 "num_blocks": 8192, 00:19:14.725 "block_size": 4096, 00:19:14.725 "physical_block_size": 4096, 00:19:14.726 "uuid": "6cae84a4-53e1-42ec-8475-d1ac843273c2", 00:19:14.726 "optimal_io_boundary": 0, 00:19:14.726 "md_size": 0, 00:19:14.726 "dif_type": 0, 00:19:14.726 "dif_is_head_of_md": false, 00:19:14.726 "dif_pi_format": 0 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "bdev_wait_for_examine" 00:19:14.726 } 00:19:14.726 ] 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "subsystem": "nbd", 00:19:14.726 "config": [] 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "subsystem": "scheduler", 00:19:14.726 "config": [ 00:19:14.726 { 00:19:14.726 "method": "framework_set_scheduler", 00:19:14.726 "params": { 00:19:14.726 "name": "static" 00:19:14.726 } 00:19:14.726 } 00:19:14.726 ] 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "subsystem": "nvmf", 00:19:14.726 "config": [ 00:19:14.726 { 00:19:14.726 "method": "nvmf_set_config", 00:19:14.726 "params": { 00:19:14.726 "discovery_filter": "match_any", 00:19:14.726 "admin_cmd_passthru": { 00:19:14.726 "identify_ctrlr": false 00:19:14.726 }, 00:19:14.726 "dhchap_digests": [ 00:19:14.726 "sha256", 00:19:14.726 "sha384", 00:19:14.726 "sha512" 00:19:14.726 ], 00:19:14.726 "dhchap_dhgroups": [ 00:19:14.726 "null", 00:19:14.726 "ffdhe2048", 00:19:14.726 "ffdhe3072", 00:19:14.726 "ffdhe4096", 00:19:14.726 "ffdhe6144", 00:19:14.726 "ffdhe8192" 00:19:14.726 ] 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_set_max_subsystems", 00:19:14.726 "params": { 00:19:14.726 "max_subsystems": 1024 
00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_set_crdt", 00:19:14.726 "params": { 00:19:14.726 "crdt1": 0, 00:19:14.726 "crdt2": 0, 00:19:14.726 "crdt3": 0 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_create_transport", 00:19:14.726 "params": { 00:19:14.726 "trtype": "TCP", 00:19:14.726 "max_queue_depth": 128, 00:19:14.726 "max_io_qpairs_per_ctrlr": 127, 00:19:14.726 "in_capsule_data_size": 4096, 00:19:14.726 "max_io_size": 131072, 00:19:14.726 "io_unit_size": 131072, 00:19:14.726 "max_aq_depth": 128, 00:19:14.726 "num_shared_buffers": 511, 00:19:14.726 "buf_cache_size": 4294967295, 00:19:14.726 "dif_insert_or_strip": false, 00:19:14.726 "zcopy": false, 00:19:14.726 "c2h_success": false, 00:19:14.726 "sock_priority": 0, 00:19:14.726 "abort_timeout_sec": 1, 00:19:14.726 "ack_timeout": 0, 00:19:14.726 "data_wr_pool_size": 0 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_create_subsystem", 00:19:14.726 "params": { 00:19:14.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.726 "allow_any_host": false, 00:19:14.726 "serial_number": "SPDK00000000000001", 00:19:14.726 "model_number": "SPDK bdev Controller", 00:19:14.726 "max_namespaces": 10, 00:19:14.726 "min_cntlid": 1, 00:19:14.726 "max_cntlid": 65519, 00:19:14.726 "ana_reporting": false 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_subsystem_add_host", 00:19:14.726 "params": { 00:19:14.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.726 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.726 "psk": "key0" 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_subsystem_add_ns", 00:19:14.726 "params": { 00:19:14.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.726 "namespace": { 00:19:14.726 "nsid": 1, 00:19:14.726 "bdev_name": "malloc0", 00:19:14.726 "nguid": "6CAE84A453E142EC8475D1AC843273C2", 00:19:14.726 "uuid": "6cae84a4-53e1-42ec-8475-d1ac843273c2", 00:19:14.726 "no_auto_visible": 
false 00:19:14.726 } 00:19:14.726 } 00:19:14.726 }, 00:19:14.726 { 00:19:14.726 "method": "nvmf_subsystem_add_listener", 00:19:14.726 "params": { 00:19:14.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.726 "listen_address": { 00:19:14.726 "trtype": "TCP", 00:19:14.726 "adrfam": "IPv4", 00:19:14.726 "traddr": "10.0.0.2", 00:19:14.726 "trsvcid": "4420" 00:19:14.726 }, 00:19:14.726 "secure_channel": true 00:19:14.726 } 00:19:14.726 } 00:19:14.726 ] 00:19:14.726 } 00:19:14.726 ] 00:19:14.726 }' 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2019475 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2019475 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2019475 ']' 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.726 15:52:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.726 [2024-12-09 15:52:09.796262] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:14.726 [2024-12-09 15:52:09.796314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.726 [2024-12-09 15:52:09.876620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.726 [2024-12-09 15:52:09.910725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.726 [2024-12-09 15:52:09.910761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.726 [2024-12-09 15:52:09.910768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.726 [2024-12-09 15:52:09.910774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.726 [2024-12-09 15:52:09.910779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.726 [2024-12-09 15:52:09.911300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.985 [2024-12-09 15:52:10.125334] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.985 [2024-12-09 15:52:10.157353] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.985 [2024-12-09 15:52:10.157548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2019518 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2019518 /var/tmp/bdevperf.sock 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2019518 ']' 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.552 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:15.552 "subsystems": [ 00:19:15.552 { 00:19:15.552 "subsystem": "keyring", 00:19:15.552 "config": [ 00:19:15.552 { 00:19:15.552 "method": "keyring_file_add_key", 00:19:15.552 "params": { 00:19:15.552 "name": "key0", 00:19:15.552 "path": "/tmp/tmp.HheesVvW1O" 00:19:15.552 } 00:19:15.552 } 00:19:15.552 ] 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "subsystem": "iobuf", 00:19:15.552 "config": [ 00:19:15.552 { 00:19:15.552 "method": "iobuf_set_options", 00:19:15.552 "params": { 00:19:15.552 "small_pool_count": 8192, 00:19:15.552 "large_pool_count": 1024, 00:19:15.552 "small_bufsize": 8192, 00:19:15.552 "large_bufsize": 135168, 00:19:15.552 "enable_numa": false 00:19:15.552 } 00:19:15.552 } 00:19:15.552 ] 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "subsystem": "sock", 00:19:15.552 "config": [ 00:19:15.552 { 00:19:15.552 "method": "sock_set_default_impl", 00:19:15.552 "params": { 00:19:15.552 "impl_name": "posix" 00:19:15.552 } 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "method": "sock_impl_set_options", 00:19:15.552 "params": { 00:19:15.552 "impl_name": "ssl", 00:19:15.552 "recv_buf_size": 4096, 00:19:15.552 "send_buf_size": 4096, 00:19:15.552 "enable_recv_pipe": true, 00:19:15.552 "enable_quickack": false, 00:19:15.552 "enable_placement_id": 0, 00:19:15.552 "enable_zerocopy_send_server": true, 00:19:15.552 "enable_zerocopy_send_client": false, 00:19:15.552 "zerocopy_threshold": 0, 00:19:15.552 "tls_version": 0, 00:19:15.552 "enable_ktls": false 00:19:15.552 } 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "method": "sock_impl_set_options", 00:19:15.552 "params": { 
00:19:15.552 "impl_name": "posix", 00:19:15.552 "recv_buf_size": 2097152, 00:19:15.552 "send_buf_size": 2097152, 00:19:15.552 "enable_recv_pipe": true, 00:19:15.552 "enable_quickack": false, 00:19:15.552 "enable_placement_id": 0, 00:19:15.552 "enable_zerocopy_send_server": true, 00:19:15.552 "enable_zerocopy_send_client": false, 00:19:15.552 "zerocopy_threshold": 0, 00:19:15.552 "tls_version": 0, 00:19:15.552 "enable_ktls": false 00:19:15.552 } 00:19:15.552 } 00:19:15.552 ] 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "subsystem": "vmd", 00:19:15.552 "config": [] 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "subsystem": "accel", 00:19:15.552 "config": [ 00:19:15.552 { 00:19:15.552 "method": "accel_set_options", 00:19:15.552 "params": { 00:19:15.552 "small_cache_size": 128, 00:19:15.552 "large_cache_size": 16, 00:19:15.552 "task_count": 2048, 00:19:15.552 "sequence_count": 2048, 00:19:15.552 "buf_count": 2048 00:19:15.552 } 00:19:15.552 } 00:19:15.552 ] 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "subsystem": "bdev", 00:19:15.552 "config": [ 00:19:15.552 { 00:19:15.552 "method": "bdev_set_options", 00:19:15.552 "params": { 00:19:15.552 "bdev_io_pool_size": 65535, 00:19:15.552 "bdev_io_cache_size": 256, 00:19:15.552 "bdev_auto_examine": true, 00:19:15.552 "iobuf_small_cache_size": 128, 00:19:15.552 "iobuf_large_cache_size": 16 00:19:15.552 } 00:19:15.552 }, 00:19:15.552 { 00:19:15.552 "method": "bdev_raid_set_options", 00:19:15.552 "params": { 00:19:15.553 "process_window_size_kb": 1024, 00:19:15.553 "process_max_bandwidth_mb_sec": 0 00:19:15.553 } 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 "method": "bdev_iscsi_set_options", 00:19:15.553 "params": { 00:19:15.553 "timeout_sec": 30 00:19:15.553 } 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 "method": "bdev_nvme_set_options", 00:19:15.553 "params": { 00:19:15.553 "action_on_timeout": "none", 00:19:15.553 "timeout_us": 0, 00:19:15.553 "timeout_admin_us": 0, 00:19:15.553 "keep_alive_timeout_ms": 10000, 00:19:15.553 
"arbitration_burst": 0, 00:19:15.553 "low_priority_weight": 0, 00:19:15.553 "medium_priority_weight": 0, 00:19:15.553 "high_priority_weight": 0, 00:19:15.553 "nvme_adminq_poll_period_us": 10000, 00:19:15.553 "nvme_ioq_poll_period_us": 0, 00:19:15.553 "io_queue_requests": 512, 00:19:15.553 "delay_cmd_submit": true, 00:19:15.553 "transport_retry_count": 4, 00:19:15.553 "bdev_retry_count": 3, 00:19:15.553 "transport_ack_timeout": 0, 00:19:15.553 "ctrlr_loss_timeout_sec": 0, 00:19:15.553 "reconnect_delay_sec": 0, 00:19:15.553 "fast_io_fail_timeout_sec": 0, 00:19:15.553 "disable_auto_failback": false, 00:19:15.553 "generate_uuids": false, 00:19:15.553 "transport_tos": 0, 00:19:15.553 "nvme_error_stat": false, 00:19:15.553 "rdma_srq_size": 0, 00:19:15.553 "io_path_stat": false, 00:19:15.553 "allow_accel_sequence": false, 00:19:15.553 "rdma_max_cq_size": 0, 00:19:15.553 "rdma_cm_event_timeout_ms": 0, 00:19:15.553 "dhchap_digests": [ 00:19:15.553 "sha256", 00:19:15.553 "sha384", 00:19:15.553 "sha512" 00:19:15.553 ], 00:19:15.553 "dhchap_dhgroups": [ 00:19:15.553 "null", 00:19:15.553 "ffdhe2048", 00:19:15.553 "ffdhe3072", 00:19:15.553 "ffdhe4096", 00:19:15.553 "ffdhe6144", 00:19:15.553 "ffdhe8192" 00:19:15.553 ] 00:19:15.553 } 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 "method": "bdev_nvme_attach_controller", 00:19:15.553 "params": { 00:19:15.553 "name": "TLSTEST", 00:19:15.553 "trtype": "TCP", 00:19:15.553 "adrfam": "IPv4", 00:19:15.553 "traddr": "10.0.0.2", 00:19:15.553 "trsvcid": "4420", 00:19:15.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.553 "prchk_reftag": false, 00:19:15.553 "prchk_guard": false, 00:19:15.553 "ctrlr_loss_timeout_sec": 0, 00:19:15.553 "reconnect_delay_sec": 0, 00:19:15.553 "fast_io_fail_timeout_sec": 0, 00:19:15.553 "psk": "key0", 00:19:15.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.553 "hdgst": false, 00:19:15.553 "ddgst": false, 00:19:15.553 "multipath": "multipath" 00:19:15.553 } 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 
"method": "bdev_nvme_set_hotplug", 00:19:15.553 "params": { 00:19:15.553 "period_us": 100000, 00:19:15.553 "enable": false 00:19:15.553 } 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 "method": "bdev_wait_for_examine" 00:19:15.553 } 00:19:15.553 ] 00:19:15.553 }, 00:19:15.553 { 00:19:15.553 "subsystem": "nbd", 00:19:15.553 "config": [] 00:19:15.553 } 00:19:15.553 ] 00:19:15.553 }' 00:19:15.553 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.553 15:52:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.553 [2024-12-09 15:52:10.714536] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:15.553 [2024-12-09 15:52:10.714581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019518 ] 00:19:15.811 [2024-12-09 15:52:10.788193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.812 [2024-12-09 15:52:10.827011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.812 [2024-12-09 15:52:10.980100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.378 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.378 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.378 15:52:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.637 Running I/O for 10 seconds... 
00:19:18.507 5503.00 IOPS, 21.50 MiB/s [2024-12-09T14:52:14.670Z] 5510.00 IOPS, 21.52 MiB/s [2024-12-09T14:52:16.060Z] 5584.67 IOPS, 21.82 MiB/s [2024-12-09T14:52:16.995Z] 5561.00 IOPS, 21.72 MiB/s [2024-12-09T14:52:17.931Z] 5590.60 IOPS, 21.84 MiB/s [2024-12-09T14:52:18.866Z] 5576.50 IOPS, 21.78 MiB/s [2024-12-09T14:52:19.801Z] 5574.71 IOPS, 21.78 MiB/s [2024-12-09T14:52:20.736Z] 5554.12 IOPS, 21.70 MiB/s [2024-12-09T14:52:22.112Z] 5495.56 IOPS, 21.47 MiB/s [2024-12-09T14:52:22.112Z] 5438.00 IOPS, 21.24 MiB/s 00:19:26.884 Latency(us) 00:19:26.884 [2024-12-09T14:52:22.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.884 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.884 Verification LBA range: start 0x0 length 0x2000 00:19:26.884 TLSTESTn1 : 10.02 5437.09 21.24 0.00 0.00 23500.77 5991.86 33204.91 00:19:26.884 [2024-12-09T14:52:22.112Z] =================================================================================================================== 00:19:26.884 [2024-12-09T14:52:22.112Z] Total : 5437.09 21.24 0.00 0.00 23500.77 5991.86 33204.91 00:19:26.884 { 00:19:26.884 "results": [ 00:19:26.884 { 00:19:26.884 "job": "TLSTESTn1", 00:19:26.884 "core_mask": "0x4", 00:19:26.884 "workload": "verify", 00:19:26.884 "status": "finished", 00:19:26.884 "verify_range": { 00:19:26.884 "start": 0, 00:19:26.884 "length": 8192 00:19:26.884 }, 00:19:26.884 "queue_depth": 128, 00:19:26.884 "io_size": 4096, 00:19:26.884 "runtime": 10.024854, 00:19:26.884 "iops": 5437.086664803298, 00:19:26.884 "mibps": 21.238619784387883, 00:19:26.884 "io_failed": 0, 00:19:26.884 "io_timeout": 0, 00:19:26.884 "avg_latency_us": 23500.770245093157, 00:19:26.884 "min_latency_us": 5991.862857142857, 00:19:26.884 "max_latency_us": 33204.90666666667 00:19:26.884 } 00:19:26.884 ], 00:19:26.884 "core_count": 1 00:19:26.884 } 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2019518 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2019518 ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2019518 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2019518 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2019518' 00:19:26.884 killing process with pid 2019518 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2019518 00:19:26.884 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.884 00:19:26.884 Latency(us) 00:19:26.884 [2024-12-09T14:52:22.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.884 [2024-12-09T14:52:22.112Z] =================================================================================================================== 00:19:26.884 [2024-12-09T14:52:22.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2019518 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2019475 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2019475 ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2019475 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2019475 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2019475' 00:19:26.884 killing process with pid 2019475 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2019475 00:19:26.884 15:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2019475 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2021458 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2021458 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.143 
15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2021458 ']' 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.143 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.143 [2024-12-09 15:52:22.211334] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:27.143 [2024-12-09 15:52:22.211383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.143 [2024-12-09 15:52:22.288913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.143 [2024-12-09 15:52:22.327654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.144 [2024-12-09 15:52:22.327691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.144 [2024-12-09 15:52:22.327698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.144 [2024-12-09 15:52:22.327704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:27.144 [2024-12-09 15:52:22.327708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.144 [2024-12-09 15:52:22.328230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.HheesVvW1O 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.HheesVvW1O 00:19:27.403 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.403 [2024-12-09 15:52:22.628598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.661 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.661 15:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.920 [2024-12-09 15:52:23.001554] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:27.920 [2024-12-09 15:52:23.001737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.920 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.178 malloc0 00:19:28.178 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.436 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:28.436 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2021794 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2021794 /var/tmp/bdevperf.sock 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2021794 ']' 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.695 
15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.695 15:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.695 [2024-12-09 15:52:23.885362] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:28.695 [2024-12-09 15:52:23.885411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021794 ] 00:19:28.954 [2024-12-09 15:52:23.960940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.954 [2024-12-09 15:52:24.000045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.954 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.954 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.954 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:29.212 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:29.470 [2024-12-09 15:52:24.459943] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:29.470 nvme0n1 00:19:29.470 15:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.470 Running I/O for 1 seconds... 00:19:30.847 4788.00 IOPS, 18.70 MiB/s 00:19:30.847 Latency(us) 00:19:30.847 [2024-12-09T14:52:26.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.847 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:30.847 Verification LBA range: start 0x0 length 0x2000 00:19:30.847 nvme0n1 : 1.02 4833.89 18.88 0.00 0.00 26290.97 6272.73 25715.08 00:19:30.847 [2024-12-09T14:52:26.075Z] =================================================================================================================== 00:19:30.847 [2024-12-09T14:52:26.075Z] Total : 4833.89 18.88 0.00 0.00 26290.97 6272.73 25715.08 00:19:30.847 { 00:19:30.847 "results": [ 00:19:30.847 { 00:19:30.847 "job": "nvme0n1", 00:19:30.847 "core_mask": "0x2", 00:19:30.847 "workload": "verify", 00:19:30.847 "status": "finished", 00:19:30.847 "verify_range": { 00:19:30.847 "start": 0, 00:19:30.847 "length": 8192 00:19:30.847 }, 00:19:30.847 "queue_depth": 128, 00:19:30.847 "io_size": 4096, 00:19:30.847 "runtime": 1.016987, 00:19:30.847 "iops": 4833.88676551421, 00:19:30.847 "mibps": 18.882370177789884, 00:19:30.847 "io_failed": 0, 00:19:30.847 "io_timeout": 0, 00:19:30.847 "avg_latency_us": 26290.96849006161, 00:19:30.847 "min_latency_us": 6272.731428571428, 00:19:30.847 "max_latency_us": 25715.078095238096 00:19:30.847 } 00:19:30.847 ], 00:19:30.847 "core_count": 1 00:19:30.847 } 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2021794 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2021794 ']' 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2021794 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2021794 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2021794' 00:19:30.847 killing process with pid 2021794 00:19:30.847 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2021794 00:19:30.847 Received shutdown signal, test time was about 1.000000 seconds 00:19:30.847 00:19:30.848 Latency(us) 00:19:30.848 [2024-12-09T14:52:26.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.848 [2024-12-09T14:52:26.076Z] =================================================================================================================== 00:19:30.848 [2024-12-09T14:52:26.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2021794 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2021458 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2021458 ']' 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2021458 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2021458 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2021458' 00:19:30.848 killing process with pid 2021458 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2021458 00:19:30.848 15:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2021458 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2022048 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2022048 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2022048 ']' 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.106 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.106 [2024-12-09 15:52:26.165835] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:31.106 [2024-12-09 15:52:26.165881] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.106 [2024-12-09 15:52:26.245907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.106 [2024-12-09 15:52:26.285923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.106 [2024-12-09 15:52:26.285959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.106 [2024-12-09 15:52:26.285966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.106 [2024-12-09 15:52:26.285972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.106 [2024-12-09 15:52:26.285977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.106 [2024-12-09 15:52:26.286534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.364 [2024-12-09 15:52:26.431393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.364 malloc0 00:19:31.364 [2024-12-09 15:52:26.459499] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.364 [2024-12-09 15:52:26.459689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2022274 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 2022274 /var/tmp/bdevperf.sock 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2022274 ']' 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.364 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.364 [2024-12-09 15:52:26.535886] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:19:31.364 [2024-12-09 15:52:26.535926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022274 ] 00:19:31.622 [2024-12-09 15:52:26.609960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.622 [2024-12-09 15:52:26.650389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.622 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.622 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.622 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.HheesVvW1O 00:19:31.880 15:52:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:31.880 [2024-12-09 15:52:27.107166] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.138 nvme0n1 00:19:32.138 15:52:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.138 Running I/O for 1 seconds... 
00:19:33.330 5266.00 IOPS, 20.57 MiB/s 00:19:33.330 Latency(us) 00:19:33.330 [2024-12-09T14:52:28.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.330 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.330 Verification LBA range: start 0x0 length 0x2000 00:19:33.330 nvme0n1 : 1.01 5323.30 20.79 0.00 0.00 23880.24 5492.54 51929.48 00:19:33.330 [2024-12-09T14:52:28.558Z] =================================================================================================================== 00:19:33.330 [2024-12-09T14:52:28.558Z] Total : 5323.30 20.79 0.00 0.00 23880.24 5492.54 51929.48 00:19:33.330 { 00:19:33.330 "results": [ 00:19:33.330 { 00:19:33.330 "job": "nvme0n1", 00:19:33.330 "core_mask": "0x2", 00:19:33.330 "workload": "verify", 00:19:33.330 "status": "finished", 00:19:33.330 "verify_range": { 00:19:33.330 "start": 0, 00:19:33.330 "length": 8192 00:19:33.330 }, 00:19:33.330 "queue_depth": 128, 00:19:33.330 "io_size": 4096, 00:19:33.330 "runtime": 1.013282, 00:19:33.330 "iops": 5323.295982757021, 00:19:33.330 "mibps": 20.794124932644614, 00:19:33.330 "io_failed": 0, 00:19:33.330 "io_timeout": 0, 00:19:33.330 "avg_latency_us": 23880.239035259634, 00:19:33.330 "min_latency_us": 5492.540952380952, 00:19:33.330 "max_latency_us": 51929.4780952381 00:19:33.330 } 00:19:33.330 ], 00:19:33.330 "core_count": 1 00:19:33.330 } 00:19:33.330 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:33.330 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.330 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.330 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.330 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:33.330 "subsystems": [ 00:19:33.330 { 00:19:33.330 "subsystem": 
"keyring", 00:19:33.330 "config": [ 00:19:33.330 { 00:19:33.330 "method": "keyring_file_add_key", 00:19:33.330 "params": { 00:19:33.330 "name": "key0", 00:19:33.330 "path": "/tmp/tmp.HheesVvW1O" 00:19:33.330 } 00:19:33.330 } 00:19:33.330 ] 00:19:33.330 }, 00:19:33.330 { 00:19:33.330 "subsystem": "iobuf", 00:19:33.330 "config": [ 00:19:33.330 { 00:19:33.330 "method": "iobuf_set_options", 00:19:33.330 "params": { 00:19:33.331 "small_pool_count": 8192, 00:19:33.331 "large_pool_count": 1024, 00:19:33.331 "small_bufsize": 8192, 00:19:33.331 "large_bufsize": 135168, 00:19:33.331 "enable_numa": false 00:19:33.331 } 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "sock", 00:19:33.331 "config": [ 00:19:33.331 { 00:19:33.331 "method": "sock_set_default_impl", 00:19:33.331 "params": { 00:19:33.331 "impl_name": "posix" 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "sock_impl_set_options", 00:19:33.331 "params": { 00:19:33.331 "impl_name": "ssl", 00:19:33.331 "recv_buf_size": 4096, 00:19:33.331 "send_buf_size": 4096, 00:19:33.331 "enable_recv_pipe": true, 00:19:33.331 "enable_quickack": false, 00:19:33.331 "enable_placement_id": 0, 00:19:33.331 "enable_zerocopy_send_server": true, 00:19:33.331 "enable_zerocopy_send_client": false, 00:19:33.331 "zerocopy_threshold": 0, 00:19:33.331 "tls_version": 0, 00:19:33.331 "enable_ktls": false 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "sock_impl_set_options", 00:19:33.331 "params": { 00:19:33.331 "impl_name": "posix", 00:19:33.331 "recv_buf_size": 2097152, 00:19:33.331 "send_buf_size": 2097152, 00:19:33.331 "enable_recv_pipe": true, 00:19:33.331 "enable_quickack": false, 00:19:33.331 "enable_placement_id": 0, 00:19:33.331 "enable_zerocopy_send_server": true, 00:19:33.331 "enable_zerocopy_send_client": false, 00:19:33.331 "zerocopy_threshold": 0, 00:19:33.331 "tls_version": 0, 00:19:33.331 "enable_ktls": false 00:19:33.331 } 00:19:33.331 } 00:19:33.331 
] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "vmd", 00:19:33.331 "config": [] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "accel", 00:19:33.331 "config": [ 00:19:33.331 { 00:19:33.331 "method": "accel_set_options", 00:19:33.331 "params": { 00:19:33.331 "small_cache_size": 128, 00:19:33.331 "large_cache_size": 16, 00:19:33.331 "task_count": 2048, 00:19:33.331 "sequence_count": 2048, 00:19:33.331 "buf_count": 2048 00:19:33.331 } 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "bdev", 00:19:33.331 "config": [ 00:19:33.331 { 00:19:33.331 "method": "bdev_set_options", 00:19:33.331 "params": { 00:19:33.331 "bdev_io_pool_size": 65535, 00:19:33.331 "bdev_io_cache_size": 256, 00:19:33.331 "bdev_auto_examine": true, 00:19:33.331 "iobuf_small_cache_size": 128, 00:19:33.331 "iobuf_large_cache_size": 16 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_raid_set_options", 00:19:33.331 "params": { 00:19:33.331 "process_window_size_kb": 1024, 00:19:33.331 "process_max_bandwidth_mb_sec": 0 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_iscsi_set_options", 00:19:33.331 "params": { 00:19:33.331 "timeout_sec": 30 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_nvme_set_options", 00:19:33.331 "params": { 00:19:33.331 "action_on_timeout": "none", 00:19:33.331 "timeout_us": 0, 00:19:33.331 "timeout_admin_us": 0, 00:19:33.331 "keep_alive_timeout_ms": 10000, 00:19:33.331 "arbitration_burst": 0, 00:19:33.331 "low_priority_weight": 0, 00:19:33.331 "medium_priority_weight": 0, 00:19:33.331 "high_priority_weight": 0, 00:19:33.331 "nvme_adminq_poll_period_us": 10000, 00:19:33.331 "nvme_ioq_poll_period_us": 0, 00:19:33.331 "io_queue_requests": 0, 00:19:33.331 "delay_cmd_submit": true, 00:19:33.331 "transport_retry_count": 4, 00:19:33.331 "bdev_retry_count": 3, 00:19:33.331 "transport_ack_timeout": 0, 00:19:33.331 "ctrlr_loss_timeout_sec": 0, 
00:19:33.331 "reconnect_delay_sec": 0, 00:19:33.331 "fast_io_fail_timeout_sec": 0, 00:19:33.331 "disable_auto_failback": false, 00:19:33.331 "generate_uuids": false, 00:19:33.331 "transport_tos": 0, 00:19:33.331 "nvme_error_stat": false, 00:19:33.331 "rdma_srq_size": 0, 00:19:33.331 "io_path_stat": false, 00:19:33.331 "allow_accel_sequence": false, 00:19:33.331 "rdma_max_cq_size": 0, 00:19:33.331 "rdma_cm_event_timeout_ms": 0, 00:19:33.331 "dhchap_digests": [ 00:19:33.331 "sha256", 00:19:33.331 "sha384", 00:19:33.331 "sha512" 00:19:33.331 ], 00:19:33.331 "dhchap_dhgroups": [ 00:19:33.331 "null", 00:19:33.331 "ffdhe2048", 00:19:33.331 "ffdhe3072", 00:19:33.331 "ffdhe4096", 00:19:33.331 "ffdhe6144", 00:19:33.331 "ffdhe8192" 00:19:33.331 ] 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_nvme_set_hotplug", 00:19:33.331 "params": { 00:19:33.331 "period_us": 100000, 00:19:33.331 "enable": false 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_malloc_create", 00:19:33.331 "params": { 00:19:33.331 "name": "malloc0", 00:19:33.331 "num_blocks": 8192, 00:19:33.331 "block_size": 4096, 00:19:33.331 "physical_block_size": 4096, 00:19:33.331 "uuid": "0be6b4d1-b800-40b7-9786-04fb654dcafc", 00:19:33.331 "optimal_io_boundary": 0, 00:19:33.331 "md_size": 0, 00:19:33.331 "dif_type": 0, 00:19:33.331 "dif_is_head_of_md": false, 00:19:33.331 "dif_pi_format": 0 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "bdev_wait_for_examine" 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "nbd", 00:19:33.331 "config": [] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "scheduler", 00:19:33.331 "config": [ 00:19:33.331 { 00:19:33.331 "method": "framework_set_scheduler", 00:19:33.331 "params": { 00:19:33.331 "name": "static" 00:19:33.331 } 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "subsystem": "nvmf", 00:19:33.331 "config": [ 00:19:33.331 { 
00:19:33.331 "method": "nvmf_set_config", 00:19:33.331 "params": { 00:19:33.331 "discovery_filter": "match_any", 00:19:33.331 "admin_cmd_passthru": { 00:19:33.331 "identify_ctrlr": false 00:19:33.331 }, 00:19:33.331 "dhchap_digests": [ 00:19:33.331 "sha256", 00:19:33.331 "sha384", 00:19:33.331 "sha512" 00:19:33.331 ], 00:19:33.331 "dhchap_dhgroups": [ 00:19:33.331 "null", 00:19:33.331 "ffdhe2048", 00:19:33.331 "ffdhe3072", 00:19:33.331 "ffdhe4096", 00:19:33.331 "ffdhe6144", 00:19:33.331 "ffdhe8192" 00:19:33.331 ] 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_set_max_subsystems", 00:19:33.331 "params": { 00:19:33.331 "max_subsystems": 1024 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_set_crdt", 00:19:33.331 "params": { 00:19:33.331 "crdt1": 0, 00:19:33.331 "crdt2": 0, 00:19:33.331 "crdt3": 0 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_create_transport", 00:19:33.331 "params": { 00:19:33.331 "trtype": "TCP", 00:19:33.331 "max_queue_depth": 128, 00:19:33.331 "max_io_qpairs_per_ctrlr": 127, 00:19:33.331 "in_capsule_data_size": 4096, 00:19:33.331 "max_io_size": 131072, 00:19:33.331 "io_unit_size": 131072, 00:19:33.331 "max_aq_depth": 128, 00:19:33.331 "num_shared_buffers": 511, 00:19:33.331 "buf_cache_size": 4294967295, 00:19:33.331 "dif_insert_or_strip": false, 00:19:33.331 "zcopy": false, 00:19:33.331 "c2h_success": false, 00:19:33.331 "sock_priority": 0, 00:19:33.331 "abort_timeout_sec": 1, 00:19:33.331 "ack_timeout": 0, 00:19:33.331 "data_wr_pool_size": 0 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_create_subsystem", 00:19:33.331 "params": { 00:19:33.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.331 "allow_any_host": false, 00:19:33.331 "serial_number": "00000000000000000000", 00:19:33.331 "model_number": "SPDK bdev Controller", 00:19:33.331 "max_namespaces": 32, 00:19:33.331 "min_cntlid": 1, 00:19:33.331 "max_cntlid": 65519, 00:19:33.331 
"ana_reporting": false 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_subsystem_add_host", 00:19:33.331 "params": { 00:19:33.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.331 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.331 "psk": "key0" 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_subsystem_add_ns", 00:19:33.331 "params": { 00:19:33.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.331 "namespace": { 00:19:33.331 "nsid": 1, 00:19:33.331 "bdev_name": "malloc0", 00:19:33.331 "nguid": "0BE6B4D1B80040B7978604FB654DCAFC", 00:19:33.331 "uuid": "0be6b4d1-b800-40b7-9786-04fb654dcafc", 00:19:33.331 "no_auto_visible": false 00:19:33.331 } 00:19:33.331 } 00:19:33.331 }, 00:19:33.331 { 00:19:33.331 "method": "nvmf_subsystem_add_listener", 00:19:33.331 "params": { 00:19:33.331 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.331 "listen_address": { 00:19:33.331 "trtype": "TCP", 00:19:33.331 "adrfam": "IPv4", 00:19:33.331 "traddr": "10.0.0.2", 00:19:33.331 "trsvcid": "4420" 00:19:33.331 }, 00:19:33.331 "secure_channel": false, 00:19:33.331 "sock_impl": "ssl" 00:19:33.331 } 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 } 00:19:33.331 ] 00:19:33.331 }' 00:19:33.331 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:33.591 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:33.591 "subsystems": [ 00:19:33.591 { 00:19:33.591 "subsystem": "keyring", 00:19:33.591 "config": [ 00:19:33.591 { 00:19:33.591 "method": "keyring_file_add_key", 00:19:33.591 "params": { 00:19:33.591 "name": "key0", 00:19:33.591 "path": "/tmp/tmp.HheesVvW1O" 00:19:33.591 } 00:19:33.591 } 00:19:33.591 ] 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "subsystem": "iobuf", 00:19:33.591 "config": [ 00:19:33.591 { 00:19:33.591 "method": "iobuf_set_options", 00:19:33.591 "params": { 00:19:33.591 
"small_pool_count": 8192, 00:19:33.591 "large_pool_count": 1024, 00:19:33.591 "small_bufsize": 8192, 00:19:33.591 "large_bufsize": 135168, 00:19:33.591 "enable_numa": false 00:19:33.591 } 00:19:33.591 } 00:19:33.591 ] 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "subsystem": "sock", 00:19:33.591 "config": [ 00:19:33.591 { 00:19:33.591 "method": "sock_set_default_impl", 00:19:33.591 "params": { 00:19:33.591 "impl_name": "posix" 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "sock_impl_set_options", 00:19:33.591 "params": { 00:19:33.591 "impl_name": "ssl", 00:19:33.591 "recv_buf_size": 4096, 00:19:33.591 "send_buf_size": 4096, 00:19:33.591 "enable_recv_pipe": true, 00:19:33.591 "enable_quickack": false, 00:19:33.591 "enable_placement_id": 0, 00:19:33.591 "enable_zerocopy_send_server": true, 00:19:33.591 "enable_zerocopy_send_client": false, 00:19:33.591 "zerocopy_threshold": 0, 00:19:33.591 "tls_version": 0, 00:19:33.591 "enable_ktls": false 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "sock_impl_set_options", 00:19:33.591 "params": { 00:19:33.591 "impl_name": "posix", 00:19:33.591 "recv_buf_size": 2097152, 00:19:33.591 "send_buf_size": 2097152, 00:19:33.591 "enable_recv_pipe": true, 00:19:33.591 "enable_quickack": false, 00:19:33.591 "enable_placement_id": 0, 00:19:33.591 "enable_zerocopy_send_server": true, 00:19:33.591 "enable_zerocopy_send_client": false, 00:19:33.591 "zerocopy_threshold": 0, 00:19:33.591 "tls_version": 0, 00:19:33.591 "enable_ktls": false 00:19:33.591 } 00:19:33.591 } 00:19:33.591 ] 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "subsystem": "vmd", 00:19:33.591 "config": [] 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "subsystem": "accel", 00:19:33.591 "config": [ 00:19:33.591 { 00:19:33.591 "method": "accel_set_options", 00:19:33.591 "params": { 00:19:33.591 "small_cache_size": 128, 00:19:33.591 "large_cache_size": 16, 00:19:33.591 "task_count": 2048, 00:19:33.591 "sequence_count": 2048, 00:19:33.591 
"buf_count": 2048 00:19:33.591 } 00:19:33.591 } 00:19:33.591 ] 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "subsystem": "bdev", 00:19:33.591 "config": [ 00:19:33.591 { 00:19:33.591 "method": "bdev_set_options", 00:19:33.591 "params": { 00:19:33.591 "bdev_io_pool_size": 65535, 00:19:33.591 "bdev_io_cache_size": 256, 00:19:33.591 "bdev_auto_examine": true, 00:19:33.591 "iobuf_small_cache_size": 128, 00:19:33.591 "iobuf_large_cache_size": 16 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "bdev_raid_set_options", 00:19:33.591 "params": { 00:19:33.591 "process_window_size_kb": 1024, 00:19:33.591 "process_max_bandwidth_mb_sec": 0 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "bdev_iscsi_set_options", 00:19:33.591 "params": { 00:19:33.591 "timeout_sec": 30 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "bdev_nvme_set_options", 00:19:33.591 "params": { 00:19:33.591 "action_on_timeout": "none", 00:19:33.591 "timeout_us": 0, 00:19:33.591 "timeout_admin_us": 0, 00:19:33.591 "keep_alive_timeout_ms": 10000, 00:19:33.591 "arbitration_burst": 0, 00:19:33.591 "low_priority_weight": 0, 00:19:33.591 "medium_priority_weight": 0, 00:19:33.591 "high_priority_weight": 0, 00:19:33.591 "nvme_adminq_poll_period_us": 10000, 00:19:33.591 "nvme_ioq_poll_period_us": 0, 00:19:33.591 "io_queue_requests": 512, 00:19:33.591 "delay_cmd_submit": true, 00:19:33.591 "transport_retry_count": 4, 00:19:33.591 "bdev_retry_count": 3, 00:19:33.591 "transport_ack_timeout": 0, 00:19:33.591 "ctrlr_loss_timeout_sec": 0, 00:19:33.591 "reconnect_delay_sec": 0, 00:19:33.591 "fast_io_fail_timeout_sec": 0, 00:19:33.591 "disable_auto_failback": false, 00:19:33.591 "generate_uuids": false, 00:19:33.591 "transport_tos": 0, 00:19:33.591 "nvme_error_stat": false, 00:19:33.591 "rdma_srq_size": 0, 00:19:33.591 "io_path_stat": false, 00:19:33.591 "allow_accel_sequence": false, 00:19:33.591 "rdma_max_cq_size": 0, 00:19:33.591 "rdma_cm_event_timeout_ms": 0, 
00:19:33.591 "dhchap_digests": [ 00:19:33.591 "sha256", 00:19:33.591 "sha384", 00:19:33.591 "sha512" 00:19:33.591 ], 00:19:33.591 "dhchap_dhgroups": [ 00:19:33.591 "null", 00:19:33.591 "ffdhe2048", 00:19:33.591 "ffdhe3072", 00:19:33.591 "ffdhe4096", 00:19:33.591 "ffdhe6144", 00:19:33.591 "ffdhe8192" 00:19:33.591 ] 00:19:33.591 } 00:19:33.591 }, 00:19:33.591 { 00:19:33.591 "method": "bdev_nvme_attach_controller", 00:19:33.591 "params": { 00:19:33.591 "name": "nvme0", 00:19:33.591 "trtype": "TCP", 00:19:33.591 "adrfam": "IPv4", 00:19:33.591 "traddr": "10.0.0.2", 00:19:33.591 "trsvcid": "4420", 00:19:33.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.591 "prchk_reftag": false, 00:19:33.591 "prchk_guard": false, 00:19:33.591 "ctrlr_loss_timeout_sec": 0, 00:19:33.591 "reconnect_delay_sec": 0, 00:19:33.591 "fast_io_fail_timeout_sec": 0, 00:19:33.591 "psk": "key0", 00:19:33.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.592 "hdgst": false, 00:19:33.592 "ddgst": false, 00:19:33.592 "multipath": "multipath" 00:19:33.592 } 00:19:33.592 }, 00:19:33.592 { 00:19:33.592 "method": "bdev_nvme_set_hotplug", 00:19:33.592 "params": { 00:19:33.592 "period_us": 100000, 00:19:33.592 "enable": false 00:19:33.592 } 00:19:33.592 }, 00:19:33.592 { 00:19:33.592 "method": "bdev_enable_histogram", 00:19:33.592 "params": { 00:19:33.592 "name": "nvme0n1", 00:19:33.592 "enable": true 00:19:33.592 } 00:19:33.592 }, 00:19:33.592 { 00:19:33.592 "method": "bdev_wait_for_examine" 00:19:33.592 } 00:19:33.592 ] 00:19:33.592 }, 00:19:33.592 { 00:19:33.592 "subsystem": "nbd", 00:19:33.592 "config": [] 00:19:33.592 } 00:19:33.592 ] 00:19:33.592 }' 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2022274 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2022274 ']' 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2022274 00:19:33.592 15:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022274 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022274' 00:19:33.592 killing process with pid 2022274 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2022274 00:19:33.592 Received shutdown signal, test time was about 1.000000 seconds 00:19:33.592 00:19:33.592 Latency(us) 00:19:33.592 [2024-12-09T14:52:28.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.592 [2024-12-09T14:52:28.820Z] =================================================================================================================== 00:19:33.592 [2024-12-09T14:52:28.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.592 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2022274 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2022048 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2022048 ']' 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2022048 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.851 
15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022048 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022048' 00:19:33.851 killing process with pid 2022048 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2022048 00:19:33.851 15:52:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2022048 00:19:34.110 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:34.110 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.110 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:34.110 "subsystems": [ 00:19:34.110 { 00:19:34.110 "subsystem": "keyring", 00:19:34.110 "config": [ 00:19:34.110 { 00:19:34.110 "method": "keyring_file_add_key", 00:19:34.110 "params": { 00:19:34.110 "name": "key0", 00:19:34.110 "path": "/tmp/tmp.HheesVvW1O" 00:19:34.110 } 00:19:34.110 } 00:19:34.110 ] 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "subsystem": "iobuf", 00:19:34.110 "config": [ 00:19:34.110 { 00:19:34.110 "method": "iobuf_set_options", 00:19:34.110 "params": { 00:19:34.110 "small_pool_count": 8192, 00:19:34.110 "large_pool_count": 1024, 00:19:34.110 "small_bufsize": 8192, 00:19:34.110 "large_bufsize": 135168, 00:19:34.110 "enable_numa": false 00:19:34.110 } 00:19:34.110 } 00:19:34.110 ] 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "subsystem": "sock", 00:19:34.110 "config": [ 00:19:34.110 { 00:19:34.110 "method": "sock_set_default_impl", 00:19:34.110 "params": { 00:19:34.110 "impl_name": "posix" 
00:19:34.110 } 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "method": "sock_impl_set_options", 00:19:34.110 "params": { 00:19:34.110 "impl_name": "ssl", 00:19:34.110 "recv_buf_size": 4096, 00:19:34.110 "send_buf_size": 4096, 00:19:34.110 "enable_recv_pipe": true, 00:19:34.110 "enable_quickack": false, 00:19:34.110 "enable_placement_id": 0, 00:19:34.110 "enable_zerocopy_send_server": true, 00:19:34.110 "enable_zerocopy_send_client": false, 00:19:34.110 "zerocopy_threshold": 0, 00:19:34.110 "tls_version": 0, 00:19:34.110 "enable_ktls": false 00:19:34.110 } 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "method": "sock_impl_set_options", 00:19:34.110 "params": { 00:19:34.110 "impl_name": "posix", 00:19:34.110 "recv_buf_size": 2097152, 00:19:34.110 "send_buf_size": 2097152, 00:19:34.110 "enable_recv_pipe": true, 00:19:34.110 "enable_quickack": false, 00:19:34.110 "enable_placement_id": 0, 00:19:34.110 "enable_zerocopy_send_server": true, 00:19:34.110 "enable_zerocopy_send_client": false, 00:19:34.110 "zerocopy_threshold": 0, 00:19:34.110 "tls_version": 0, 00:19:34.110 "enable_ktls": false 00:19:34.110 } 00:19:34.110 } 00:19:34.110 ] 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "subsystem": "vmd", 00:19:34.110 "config": [] 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "subsystem": "accel", 00:19:34.110 "config": [ 00:19:34.110 { 00:19:34.110 "method": "accel_set_options", 00:19:34.110 "params": { 00:19:34.110 "small_cache_size": 128, 00:19:34.110 "large_cache_size": 16, 00:19:34.110 "task_count": 2048, 00:19:34.110 "sequence_count": 2048, 00:19:34.110 "buf_count": 2048 00:19:34.110 } 00:19:34.110 } 00:19:34.110 ] 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "subsystem": "bdev", 00:19:34.110 "config": [ 00:19:34.110 { 00:19:34.110 "method": "bdev_set_options", 00:19:34.110 "params": { 00:19:34.110 "bdev_io_pool_size": 65535, 00:19:34.110 "bdev_io_cache_size": 256, 00:19:34.110 "bdev_auto_examine": true, 00:19:34.110 "iobuf_small_cache_size": 128, 00:19:34.110 
"iobuf_large_cache_size": 16 00:19:34.110 } 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "method": "bdev_raid_set_options", 00:19:34.110 "params": { 00:19:34.110 "process_window_size_kb": 1024, 00:19:34.110 "process_max_bandwidth_mb_sec": 0 00:19:34.110 } 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "method": "bdev_iscsi_set_options", 00:19:34.110 "params": { 00:19:34.110 "timeout_sec": 30 00:19:34.110 } 00:19:34.110 }, 00:19:34.110 { 00:19:34.110 "method": "bdev_nvme_set_options", 00:19:34.110 "params": { 00:19:34.110 "action_on_timeout": "none", 00:19:34.110 "timeout_us": 0, 00:19:34.110 "timeout_admin_us": 0, 00:19:34.110 "keep_alive_timeout_ms": 10000, 00:19:34.110 "arbitration_burst": 0, 00:19:34.110 "low_priority_weight": 0, 00:19:34.110 "medium_priority_weight": 0, 00:19:34.110 "high_priority_weight": 0, 00:19:34.110 "nvme_adminq_poll_period_us": 10000, 00:19:34.110 "nvme_ioq_poll_period_us": 0, 00:19:34.110 "io_queue_requests": 0, 00:19:34.110 "delay_cmd_submit": true, 00:19:34.110 "transport_retry_count": 4, 00:19:34.110 "bdev_retry_count": 3, 00:19:34.110 "transport_ack_timeout": 0, 00:19:34.110 "ctrlr_loss_timeout_sec": 0, 00:19:34.110 "reconnect_delay_sec": 0, 00:19:34.110 "fast_io_fail_timeout_sec": 0, 00:19:34.110 "disable_auto_failback": false, 00:19:34.110 "generate_uuids": false, 00:19:34.110 "transport_tos": 0, 00:19:34.110 "nvme_error_stat": false, 00:19:34.110 "rdma_srq_size": 0, 00:19:34.111 "io_path_stat": false, 00:19:34.111 "allow_accel_sequence": false, 00:19:34.111 "rdma_max_cq_size": 0, 00:19:34.111 "rdma_cm_event_timeout_ms": 0, 00:19:34.111 "dhchap_digests": [ 00:19:34.111 "sha256", 00:19:34.111 "sha384", 00:19:34.111 "sha512" 00:19:34.111 ], 00:19:34.111 "dhchap_dhgroups": [ 00:19:34.111 "null", 00:19:34.111 "ffdhe2048", 00:19:34.111 "ffdhe3072", 00:19:34.111 "ffdhe4096", 00:19:34.111 "ffdhe6144", 00:19:34.111 "ffdhe8192" 00:19:34.111 ] 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "bdev_nvme_set_hotplug", 
00:19:34.111 "params": { 00:19:34.111 "period_us": 100000, 00:19:34.111 "enable": false 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "bdev_malloc_create", 00:19:34.111 "params": { 00:19:34.111 "name": "malloc0", 00:19:34.111 "num_blocks": 8192, 00:19:34.111 "block_size": 4096, 00:19:34.111 "physical_block_size": 4096, 00:19:34.111 "uuid": "0be6b4d1-b800-40b7-9786-04fb654dcafc", 00:19:34.111 "optimal_io_boundary": 0, 00:19:34.111 "md_size": 0, 00:19:34.111 "dif_type": 0, 00:19:34.111 "dif_is_head_of_md": false, 00:19:34.111 "dif_pi_format": 0 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "bdev_wait_for_examine" 00:19:34.111 } 00:19:34.111 ] 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "subsystem": "nbd", 00:19:34.111 "config": [] 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "subsystem": "scheduler", 00:19:34.111 "config": [ 00:19:34.111 { 00:19:34.111 "method": "framework_set_scheduler", 00:19:34.111 "params": { 00:19:34.111 "name": "static" 00:19:34.111 } 00:19:34.111 } 00:19:34.111 ] 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "subsystem": "nvmf", 00:19:34.111 "config": [ 00:19:34.111 { 00:19:34.111 "method": "nvmf_set_config", 00:19:34.111 "params": { 00:19:34.111 "discovery_filter": "match_any", 00:19:34.111 "admin_cmd_passthru": { 00:19:34.111 "identify_ctrlr": false 00:19:34.111 }, 00:19:34.111 "dhchap_digests": [ 00:19:34.111 "sha256", 00:19:34.111 "sha384", 00:19:34.111 "sha512" 00:19:34.111 ], 00:19:34.111 "dhchap_dhgroups": [ 00:19:34.111 "null", 00:19:34.111 "ffdhe2048", 00:19:34.111 "ffdhe3072", 00:19:34.111 "ffdhe4096", 00:19:34.111 "ffdhe6144", 00:19:34.111 "ffdhe8192" 00:19:34.111 ] 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_set_max_subsystems", 00:19:34.111 "params": { 00:19:34.111 "max_subsystems": 1024 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_set_crdt", 00:19:34.111 "params": { 00:19:34.111 "crdt1": 0, 00:19:34.111 "crdt2": 0, 00:19:34.111 
"crdt3": 0 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_create_transport", 00:19:34.111 "params": { 00:19:34.111 "trtype": "TCP", 00:19:34.111 "max_queue_depth": 128, 00:19:34.111 "max_io_qpairs_per_ctrlr": 127, 00:19:34.111 "in_capsule_data_size": 4096, 00:19:34.111 "max_io_size": 131072, 00:19:34.111 "io_unit_size": 131072, 00:19:34.111 "max_aq_depth": 128, 00:19:34.111 "num_shared_buffers": 511, 00:19:34.111 "buf_cache_size": 4294967295, 00:19:34.111 "dif_insert_or_strip": false, 00:19:34.111 "zcopy": false, 00:19:34.111 "c2h_success": false, 00:19:34.111 "sock_priority": 0, 00:19:34.111 "abort_timeout_sec": 1, 00:19:34.111 "ack_timeout": 0, 00:19:34.111 "data_wr_pool_size": 0 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_create_subsystem", 00:19:34.111 "params": { 00:19:34.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.111 "allow_any_host": false, 00:19:34.111 "serial_number": "00000000000000000000", 00:19:34.111 "model_number": "SPDK bdev Controller", 00:19:34.111 "max_namespaces": 32, 00:19:34.111 "min_cntlid": 1, 00:19:34.111 "max_cntlid": 65519, 00:19:34.111 "ana_reporting": false 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_subsystem_add_host", 00:19:34.111 "params": { 00:19:34.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.111 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.111 "psk": "key0" 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_subsystem_add_ns", 00:19:34.111 "params": { 00:19:34.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.111 "namespace": { 00:19:34.111 "nsid": 1, 00:19:34.111 "bdev_name": "malloc0", 00:19:34.111 "nguid": "0BE6B4D1B80040B7978604FB654DCAFC", 00:19:34.111 "uuid": "0be6b4d1-b800-40b7-9786-04fb654dcafc", 00:19:34.111 "no_auto_visible": false 00:19:34.111 } 00:19:34.111 } 00:19:34.111 }, 00:19:34.111 { 00:19:34.111 "method": "nvmf_subsystem_add_listener", 00:19:34.111 "params": { 00:19:34.111 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:34.111 "listen_address": { 00:19:34.111 "trtype": "TCP", 00:19:34.111 "adrfam": "IPv4", 00:19:34.111 "traddr": "10.0.0.2", 00:19:34.111 "trsvcid": "4420" 00:19:34.111 }, 00:19:34.111 "secure_channel": false, 00:19:34.111 "sock_impl": "ssl" 00:19:34.111 } 00:19:34.111 } 00:19:34.111 ] 00:19:34.111 } 00:19:34.111 ] 00:19:34.111 }' 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2022644 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2022644 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2022644 ']' 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.111 15:52:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.111 [2024-12-09 15:52:29.178961] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:19:34.111 [2024-12-09 15:52:29.179011] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.111 [2024-12-09 15:52:29.257870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.111 [2024-12-09 15:52:29.294254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.111 [2024-12-09 15:52:29.294301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.111 [2024-12-09 15:52:29.294308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.111 [2024-12-09 15:52:29.294314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.111 [2024-12-09 15:52:29.294319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.111 [2024-12-09 15:52:29.294883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.370 [2024-12-09 15:52:29.506171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.370 [2024-12-09 15:52:29.538206] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.370 [2024-12-09 15:52:29.538396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2022776 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2022776 /var/tmp/bdevperf.sock 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2022776 ']' 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:34.938 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:34.938 "subsystems": [ 00:19:34.938 { 00:19:34.938 "subsystem": "keyring", 00:19:34.938 "config": [ 00:19:34.938 { 00:19:34.938 "method": "keyring_file_add_key", 00:19:34.938 "params": { 00:19:34.938 "name": "key0", 00:19:34.938 "path": "/tmp/tmp.HheesVvW1O" 00:19:34.938 } 00:19:34.938 } 00:19:34.938 ] 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "subsystem": "iobuf", 00:19:34.938 "config": [ 00:19:34.938 { 00:19:34.938 "method": "iobuf_set_options", 00:19:34.938 "params": { 00:19:34.938 "small_pool_count": 8192, 00:19:34.938 "large_pool_count": 1024, 00:19:34.938 "small_bufsize": 8192, 00:19:34.938 "large_bufsize": 135168, 00:19:34.938 "enable_numa": false 00:19:34.938 } 00:19:34.938 } 00:19:34.938 ] 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "subsystem": "sock", 00:19:34.938 "config": [ 00:19:34.938 { 00:19:34.938 "method": "sock_set_default_impl", 00:19:34.938 "params": { 00:19:34.938 "impl_name": "posix" 00:19:34.938 } 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "method": "sock_impl_set_options", 00:19:34.938 "params": { 00:19:34.938 "impl_name": "ssl", 00:19:34.938 "recv_buf_size": 4096, 00:19:34.938 "send_buf_size": 4096, 00:19:34.938 "enable_recv_pipe": true, 00:19:34.938 "enable_quickack": false, 00:19:34.938 "enable_placement_id": 0, 00:19:34.938 "enable_zerocopy_send_server": true, 00:19:34.938 "enable_zerocopy_send_client": false, 00:19:34.938 "zerocopy_threshold": 0, 00:19:34.938 "tls_version": 0, 00:19:34.938 "enable_ktls": false 00:19:34.938 } 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "method": "sock_impl_set_options", 00:19:34.938 "params": { 00:19:34.938 "impl_name": "posix", 00:19:34.938 "recv_buf_size": 2097152, 00:19:34.938 "send_buf_size": 2097152, 00:19:34.938 "enable_recv_pipe": true, 00:19:34.938 "enable_quickack": false, 00:19:34.938 "enable_placement_id": 0, 00:19:34.938 "enable_zerocopy_send_server": true, 00:19:34.938 
"enable_zerocopy_send_client": false, 00:19:34.938 "zerocopy_threshold": 0, 00:19:34.938 "tls_version": 0, 00:19:34.938 "enable_ktls": false 00:19:34.938 } 00:19:34.938 } 00:19:34.938 ] 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "subsystem": "vmd", 00:19:34.938 "config": [] 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "subsystem": "accel", 00:19:34.938 "config": [ 00:19:34.938 { 00:19:34.938 "method": "accel_set_options", 00:19:34.938 "params": { 00:19:34.938 "small_cache_size": 128, 00:19:34.938 "large_cache_size": 16, 00:19:34.938 "task_count": 2048, 00:19:34.938 "sequence_count": 2048, 00:19:34.938 "buf_count": 2048 00:19:34.938 } 00:19:34.938 } 00:19:34.938 ] 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "subsystem": "bdev", 00:19:34.938 "config": [ 00:19:34.938 { 00:19:34.938 "method": "bdev_set_options", 00:19:34.938 "params": { 00:19:34.938 "bdev_io_pool_size": 65535, 00:19:34.938 "bdev_io_cache_size": 256, 00:19:34.938 "bdev_auto_examine": true, 00:19:34.938 "iobuf_small_cache_size": 128, 00:19:34.938 "iobuf_large_cache_size": 16 00:19:34.938 } 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "method": "bdev_raid_set_options", 00:19:34.938 "params": { 00:19:34.938 "process_window_size_kb": 1024, 00:19:34.938 "process_max_bandwidth_mb_sec": 0 00:19:34.938 } 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "method": "bdev_iscsi_set_options", 00:19:34.938 "params": { 00:19:34.938 "timeout_sec": 30 00:19:34.938 } 00:19:34.938 }, 00:19:34.938 { 00:19:34.938 "method": "bdev_nvme_set_options", 00:19:34.938 "params": { 00:19:34.938 "action_on_timeout": "none", 00:19:34.938 "timeout_us": 0, 00:19:34.938 "timeout_admin_us": 0, 00:19:34.938 "keep_alive_timeout_ms": 10000, 00:19:34.938 "arbitration_burst": 0, 00:19:34.938 "low_priority_weight": 0, 00:19:34.938 "medium_priority_weight": 0, 00:19:34.938 "high_priority_weight": 0, 00:19:34.938 "nvme_adminq_poll_period_us": 10000, 00:19:34.938 "nvme_ioq_poll_period_us": 0, 00:19:34.938 "io_queue_requests": 512, 00:19:34.938 
"delay_cmd_submit": true, 00:19:34.938 "transport_retry_count": 4, 00:19:34.938 "bdev_retry_count": 3, 00:19:34.938 "transport_ack_timeout": 0, 00:19:34.938 "ctrlr_loss_timeout_sec": 0, 00:19:34.938 "reconnect_delay_sec": 0, 00:19:34.938 "fast_io_fail_timeout_sec": 0, 00:19:34.938 "disable_auto_failback": false, 00:19:34.939 "generate_uuids": false, 00:19:34.939 "transport_tos": 0, 00:19:34.939 "nvme_error_stat": false, 00:19:34.939 "rdma_srq_size": 0, 00:19:34.939 "io_path_stat": false, 00:19:34.939 "allow_accel_sequence": false, 00:19:34.939 "rdma_max_cq_size": 0, 00:19:34.939 "rdma_cm_event_timeout_ms": 0, 00:19:34.939 "dhchap_digests": [ 00:19:34.939 "sha256", 00:19:34.939 "sha384", 00:19:34.939 "sha512" 00:19:34.939 ], 00:19:34.939 "dhchap_dhgroups": [ 00:19:34.939 "null", 00:19:34.939 "ffdhe2048", 00:19:34.939 "ffdhe3072", 00:19:34.939 "ffdhe4096", 00:19:34.939 "ffdhe6144", 00:19:34.939 "ffdhe8192" 00:19:34.939 ] 00:19:34.939 } 00:19:34.939 }, 00:19:34.939 { 00:19:34.939 "method": "bdev_nvme_attach_controller", 00:19:34.939 "params": { 00:19:34.939 "name": "nvme0", 00:19:34.939 "trtype": "TCP", 00:19:34.939 "adrfam": "IPv4", 00:19:34.939 "traddr": "10.0.0.2", 00:19:34.939 "trsvcid": "4420", 00:19:34.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.939 "prchk_reftag": false, 00:19:34.939 "prchk_guard": false, 00:19:34.939 "ctrlr_loss_timeout_sec": 0, 00:19:34.939 "reconnect_delay_sec": 0, 00:19:34.939 "fast_io_fail_timeout_sec": 0, 00:19:34.939 "psk": "key0", 00:19:34.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.939 "hdgst": false, 00:19:34.939 "ddgst": false, 00:19:34.939 "multipath": "multipath" 00:19:34.939 } 00:19:34.939 }, 00:19:34.939 { 00:19:34.939 "method": "bdev_nvme_set_hotplug", 00:19:34.939 "params": { 00:19:34.939 "period_us": 100000, 00:19:34.939 "enable": false 00:19:34.939 } 00:19:34.939 }, 00:19:34.939 { 00:19:34.939 "method": "bdev_enable_histogram", 00:19:34.939 "params": { 00:19:34.939 "name": "nvme0n1", 00:19:34.939 "enable": 
true 00:19:34.939 } 00:19:34.939 }, 00:19:34.939 { 00:19:34.939 "method": "bdev_wait_for_examine" 00:19:34.939 } 00:19:34.939 ] 00:19:34.939 }, 00:19:34.939 { 00:19:34.939 "subsystem": "nbd", 00:19:34.939 "config": [] 00:19:34.939 } 00:19:34.939 ] 00:19:34.939 }' 00:19:34.939 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.939 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.939 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.939 [2024-12-09 15:52:30.101596] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:19:34.939 [2024-12-09 15:52:30.101641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022776 ] 00:19:35.198 [2024-12-09 15:52:30.177296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.198 [2024-12-09 15:52:30.217865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.198 [2024-12-09 15:52:30.371677] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.764 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.764 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:35.764 15:52:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:35.764 15:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:36.023 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.023 15:52:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:36.023 Running I/O for 1 seconds... 00:19:37.399 5241.00 IOPS, 20.47 MiB/s 00:19:37.399 Latency(us) 00:19:37.399 [2024-12-09T14:52:32.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.399 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.399 Verification LBA range: start 0x0 length 0x2000 00:19:37.399 nvme0n1 : 1.01 5302.48 20.71 0.00 0.00 23982.96 5274.09 33704.23 00:19:37.399 [2024-12-09T14:52:32.627Z] =================================================================================================================== 00:19:37.399 [2024-12-09T14:52:32.627Z] Total : 5302.48 20.71 0.00 0.00 23982.96 5274.09 33704.23 00:19:37.399 { 00:19:37.399 "results": [ 00:19:37.399 { 00:19:37.399 "job": "nvme0n1", 00:19:37.399 "core_mask": "0x2", 00:19:37.399 "workload": "verify", 00:19:37.399 "status": "finished", 00:19:37.399 "verify_range": { 00:19:37.399 "start": 0, 00:19:37.399 "length": 8192 00:19:37.399 }, 00:19:37.399 "queue_depth": 128, 00:19:37.399 "io_size": 4096, 00:19:37.399 "runtime": 1.012545, 00:19:37.399 "iops": 5302.48038358789, 00:19:37.399 "mibps": 20.712813998390196, 00:19:37.399 "io_failed": 0, 00:19:37.399 "io_timeout": 0, 00:19:37.399 "avg_latency_us": 23982.95642462461, 00:19:37.399 "min_latency_us": 5274.087619047619, 00:19:37.399 "max_latency_us": 33704.22857142857 00:19:37.399 } 00:19:37.399 ], 00:19:37.399 "core_count": 1 00:19:37.399 } 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:37.400 15:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:37.400 nvmf_trace.0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2022776 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2022776 ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2022776 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2022776 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022776' 00:19:37.400 killing process with pid 2022776 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2022776 00:19:37.400 Received shutdown signal, test time was about 1.000000 seconds 00:19:37.400 00:19:37.400 Latency(us) 00:19:37.400 [2024-12-09T14:52:32.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.400 [2024-12-09T14:52:32.628Z] =================================================================================================================== 00:19:37.400 [2024-12-09T14:52:32.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2022776 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.400 rmmod nvme_tcp 00:19:37.400 rmmod nvme_fabrics 00:19:37.400 rmmod nvme_keyring 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2022644 ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2022644 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2022644 ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2022644 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.400 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022644 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022644' 00:19:37.659 killing process with pid 2022644 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2022644 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2022644 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.659 15:52:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.695 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:39.695 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UqKKm3MUqa /tmp/tmp.YGTVrdWGkn /tmp/tmp.HheesVvW1O 00:19:39.695 00:19:39.695 real 1m19.070s 00:19:39.695 user 2m0.450s 00:19:39.695 sys 0m30.925s 00:19:39.695 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.695 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.695 ************************************ 00:19:39.695 END TEST nvmf_tls 00:19:39.695 ************************************ 00:19:39.958 15:52:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:39.958 15:52:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.958 15:52:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.958 15:52:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.958 ************************************ 00:19:39.958 START TEST nvmf_fips 00:19:39.958 ************************************ 00:19:39.958 15:52:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:39.958 * Looking for test storage... 00:19:39.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.958 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.958 
15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:39.959 15:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:39.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.959 --rc genhtml_branch_coverage=1 00:19:39.959 --rc genhtml_function_coverage=1 00:19:39.959 --rc genhtml_legend=1 00:19:39.959 --rc geninfo_all_blocks=1 00:19:39.959 --rc geninfo_unexecuted_blocks=1 00:19:39.959 00:19:39.959 ' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:39.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.959 --rc genhtml_branch_coverage=1 00:19:39.959 --rc genhtml_function_coverage=1 00:19:39.959 --rc genhtml_legend=1 00:19:39.959 --rc geninfo_all_blocks=1 00:19:39.959 --rc geninfo_unexecuted_blocks=1 00:19:39.959 00:19:39.959 ' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:39.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.959 --rc genhtml_branch_coverage=1 00:19:39.959 --rc genhtml_function_coverage=1 00:19:39.959 --rc genhtml_legend=1 00:19:39.959 --rc geninfo_all_blocks=1 00:19:39.959 --rc geninfo_unexecuted_blocks=1 00:19:39.959 00:19:39.959 ' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:39.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.959 --rc genhtml_branch_coverage=1 00:19:39.959 --rc genhtml_function_coverage=1 00:19:39.959 --rc genhtml_legend=1 00:19:39.959 --rc geninfo_all_blocks=1 00:19:39.959 --rc geninfo_unexecuted_blocks=1 00:19:39.959 00:19:39.959 ' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.959 15:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.959 15:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.959 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.960 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.221 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:40.222 Error setting digest 00:19:40.222 40822C146E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:40.222 40822C146E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.222 15:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.222 15:52:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:46.790 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.791 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.791 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.791 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.791 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.791 15:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.791 15:52:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:46.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:19:46.791 00:19:46.791 --- 10.0.0.2 ping statistics --- 00:19:46.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.791 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:19:46.791 00:19:46.791 --- 10.0.0.1 ping statistics --- 00:19:46.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.791 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:46.791 15:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2026756 00:19:46.791 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2026756 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2026756 ']' 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.792 15:52:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:46.792 [2024-12-09 15:52:41.236089] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:19:46.792 [2024-12-09 15:52:41.236139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.792 [2024-12-09 15:52:41.315850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.792 [2024-12-09 15:52:41.355394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.792 [2024-12-09 15:52:41.355429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.792 [2024-12-09 15:52:41.355437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.792 [2024-12-09 15:52:41.355443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.792 [2024-12-09 15:52:41.355448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.792 [2024-12-09 15:52:41.355967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tBK 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tBK 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tBK 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tBK 00:19:47.051 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.310 [2024-12-09 15:52:42.281633] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.310 [2024-12-09 15:52:42.297636] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.310 [2024-12-09 15:52:42.297815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.310 malloc0 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2027005 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2027005 /var/tmp/bdevperf.sock 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2027005 ']' 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.310 15:52:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.310 [2024-12-09 15:52:42.423746] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:19:47.310 [2024-12-09 15:52:42.423790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027005 ] 00:19:47.310 [2024-12-09 15:52:42.499452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.569 [2024-12-09 15:52:42.538329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.135 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.135 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:48.135 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tBK 00:19:48.394 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.394 [2024-12-09 15:52:43.603855] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.652 TLSTESTn1 00:19:48.653 15:52:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:48.653 Running I/O for 10 seconds... 
00:19:50.965 5202.00 IOPS, 20.32 MiB/s [2024-12-09T14:52:47.130Z] 5407.50 IOPS, 21.12 MiB/s [2024-12-09T14:52:48.066Z] 5479.00 IOPS, 21.40 MiB/s [2024-12-09T14:52:49.003Z] 5504.25 IOPS, 21.50 MiB/s [2024-12-09T14:52:49.939Z] 5478.20 IOPS, 21.40 MiB/s [2024-12-09T14:52:50.875Z] 5504.50 IOPS, 21.50 MiB/s [2024-12-09T14:52:51.810Z] 5511.14 IOPS, 21.53 MiB/s [2024-12-09T14:52:53.187Z] 5520.50 IOPS, 21.56 MiB/s [2024-12-09T14:52:54.123Z] 5486.11 IOPS, 21.43 MiB/s [2024-12-09T14:52:54.123Z] 5444.70 IOPS, 21.27 MiB/s 00:19:58.895 Latency(us) 00:19:58.895 [2024-12-09T14:52:54.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.895 Verification LBA range: start 0x0 length 0x2000 00:19:58.895 TLSTESTn1 : 10.02 5448.40 21.28 0.00 0.00 23457.84 4993.22 34453.21 00:19:58.895 [2024-12-09T14:52:54.123Z] =================================================================================================================== 00:19:58.895 [2024-12-09T14:52:54.123Z] Total : 5448.40 21.28 0.00 0.00 23457.84 4993.22 34453.21 00:19:58.895 { 00:19:58.895 "results": [ 00:19:58.895 { 00:19:58.895 "job": "TLSTESTn1", 00:19:58.895 "core_mask": "0x4", 00:19:58.895 "workload": "verify", 00:19:58.895 "status": "finished", 00:19:58.895 "verify_range": { 00:19:58.895 "start": 0, 00:19:58.895 "length": 8192 00:19:58.895 }, 00:19:58.895 "queue_depth": 128, 00:19:58.895 "io_size": 4096, 00:19:58.895 "runtime": 10.016523, 00:19:58.895 "iops": 5448.397612624661, 00:19:58.895 "mibps": 21.28280317431508, 00:19:58.895 "io_failed": 0, 00:19:58.895 "io_timeout": 0, 00:19:58.895 "avg_latency_us": 23457.8435442658, 00:19:58.895 "min_latency_us": 4993.219047619048, 00:19:58.895 "max_latency_us": 34453.21142857143 00:19:58.895 } 00:19:58.895 ], 00:19:58.895 "core_count": 1 00:19:58.895 } 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:58.895 
15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:58.895 nvmf_trace.0 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2027005 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2027005 ']' 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2027005 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2027005 00:19:58.895 15:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2027005' 00:19:58.895 killing process with pid 2027005 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2027005 00:19:58.895 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.895 00:19:58.895 Latency(us) 00:19:58.895 [2024-12-09T14:52:54.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.895 [2024-12-09T14:52:54.123Z] =================================================================================================================== 00:19:58.895 [2024-12-09T14:52:54.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.895 15:52:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2027005 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.154 rmmod nvme_tcp 00:19:59.154 rmmod nvme_fabrics 00:19:59.154 rmmod nvme_keyring 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2026756 ']' 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2026756 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2026756 ']' 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2026756 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2026756 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.154 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.155 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2026756' 00:19:59.155 killing process with pid 2026756 00:19:59.155 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2026756 00:19:59.155 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2026756 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.413 15:52:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tBK 00:20:01.321 00:20:01.321 real 0m21.512s 00:20:01.321 user 0m23.298s 00:20:01.321 sys 0m9.561s 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:01.321 ************************************ 00:20:01.321 END TEST nvmf_fips 00:20:01.321 ************************************ 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.321 15:52:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.581 ************************************ 00:20:01.581 START TEST nvmf_control_msg_list 00:20:01.581 ************************************ 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:01.581 * Looking for test storage... 00:20:01.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.581 15:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:01.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.581 --rc genhtml_branch_coverage=1 00:20:01.581 --rc genhtml_function_coverage=1 00:20:01.581 --rc genhtml_legend=1 00:20:01.581 --rc geninfo_all_blocks=1 00:20:01.581 --rc geninfo_unexecuted_blocks=1 00:20:01.581 00:20:01.581 ' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:01.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.581 --rc genhtml_branch_coverage=1 00:20:01.581 --rc genhtml_function_coverage=1 00:20:01.581 --rc genhtml_legend=1 00:20:01.581 --rc geninfo_all_blocks=1 00:20:01.581 --rc geninfo_unexecuted_blocks=1 00:20:01.581 00:20:01.581 ' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:01.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.581 --rc genhtml_branch_coverage=1 00:20:01.581 --rc genhtml_function_coverage=1 00:20:01.581 --rc genhtml_legend=1 00:20:01.581 --rc geninfo_all_blocks=1 00:20:01.581 --rc geninfo_unexecuted_blocks=1 00:20:01.581 00:20:01.581 ' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:20:01.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.581 --rc genhtml_branch_coverage=1 00:20:01.581 --rc genhtml_function_coverage=1 00:20:01.581 --rc genhtml_legend=1 00:20:01.581 --rc geninfo_all_blocks=1 00:20:01.581 --rc geninfo_unexecuted_blocks=1 00:20:01.581 00:20:01.581 ' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
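The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` calls traced above split each version string on `.`, `-` or `:` into arrays and compare the fields numerically. A minimal standalone sketch of that comparison (a reimplementation for illustration, not the actual `scripts/common.sh` source; assumes purely numeric fields):

```shell
#!/usr/bin/env bash
# Field-wise numeric version compare, after the cmp_versions trace above.
# lt A B: exit 0 when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local IFS=.-:              # split on '.', '-' and ':' as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                   # equal versions are not less-than
}

# Numeric, not lexicographic: 1.9 sorts before 1.15.
lt 1.15 2 && echo "lcov 1.15 predates 2"
```

The numeric compare is why the lcov check behaves correctly for versions like `1.9` vs `1.15`, where a plain string sort would give the wrong answer.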
00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.581 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.582 15:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.582 15:52:56 
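`paths/export.sh`, sourced once per script in the chain, prepends the Go/protoc/golangci directories unconditionally each time, which is why the PATH echoed above carries the same entries many times over. An idempotent prepend would avoid the duplication (`prepend_path` is a hypothetical helper for illustration, not part of the SPDK scripts; the demo saves and restores the real PATH):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present: do nothing
        *) PATH="$1:$PATH" ;;   # otherwise prepend once
    esac
}

saved_path=$PATH                # keep the real PATH intact for the demo
PATH=/usr/local/bin:/usr/bin
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin # second call is a no-op
echo "$PATH"                    # /opt/go/1.21.1/bin:/usr/local/bin:/usr/bin
result=$PATH
PATH=$saved_path                # restore so the rest of the shell keeps working
```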
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.582 15:52:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.151 15:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:08.151 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:08.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.151 15:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.151 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:08.152 Found net devices under 0000:af:00.0: cvl_0_0 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.152 15:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:08.152 Found net devices under 0000:af:00.1: cvl_0_1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.152 15:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:20:08.152 00:20:08.152 --- 10.0.0.2 ping statistics --- 00:20:08.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.152 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:20:08.152 00:20:08.152 --- 10.0.0.1 ping statistics --- 00:20:08.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.152 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2032321 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2032321 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2032321 ']' 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
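The `ipts` wrapper traced during network init (common.sh@287/@790) tags every rule it inserts with an `SPDK_NVMF:` comment, so the `iptr` teardown seen at the end of the FIPS test can strip exactly those rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filter step against a canned two-rule dump (illustrative rule text, run unprivileged instead of against a live firewall):

```shell
#!/usr/bin/env bash
# Teardown pattern from the trace: keep every saved rule except the
# SPDK-tagged ones. A real run pipes iptables-save into iptables-restore;
# here a canned ruleset stands in so no root access is needed.
saved_rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -i lo -j ACCEPT'

# grep -v SPDK_NVMF drops the tagged rule and leaves unrelated rules intact.
kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
echo "$kept"   # -A INPUT -i lo -j ACCEPT
```

Tagging rules with a comment at insert time is what makes the teardown safe: only rules the test itself added are removed, never pre-existing firewall state.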
00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.152 15:53:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.152 [2024-12-09 15:53:02.757378] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:20:08.152 [2024-12-09 15:53:02.757429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.152 [2024-12-09 15:53:02.838416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.152 [2024-12-09 15:53:02.879458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.152 [2024-12-09 15:53:02.879490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.152 [2024-12-09 15:53:02.879497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.152 [2024-12-09 15:53:02.879503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.152 [2024-12-09 15:53:02.879508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
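`nvmfappstart` launches the target inside the test namespace: common.sh@266 builds `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")` and @293 prepends it to `NVMF_APP`, producing the `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF` invocation at @508. A sketch of that array-prepend pattern (binary path shortened for the sketch; nothing is executed):

```shell
#!/usr/bin/env bash
# Array-prepend pattern from nvmf/common.sh: every target-side command is
# wrapped in `ip netns exec <ns>` so it runs in the test namespace.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF)                       # full path shortened here
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepend the wrapper

# The test would now run "${NVMF_APP[@]}"; this just shows the final argv.
echo "${NVMF_APP[@]}"   # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF
```

Using an array rather than a string keeps each argument intact even if a path contains spaces, which is why the scripts expand `"${NVMF_APP[@]}"` quoted.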
00:20:08.152 [2024-12-09 15:53:02.880037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.411 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.411 [2024-12-09 15:53:03.634898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.669 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.670 Malloc0 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.670 [2024-12-09 15:53:03.687045] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2032567 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2032568 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2032569 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.670 15:53:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2032567 00:20:08.670 [2024-12-09 15:53:03.777684] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:08.670 [2024-12-09 15:53:03.777874] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:08.670 [2024-12-09 15:53:03.778054] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.044 Initializing NVMe Controllers 00:20:10.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:10.044 Initialization complete. Launching workers. 00:20:10.044 ======================================================== 00:20:10.044 Latency(us) 00:20:10.044 Device Information : IOPS MiB/s Average min max 00:20:10.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6512.00 25.44 153.24 128.66 407.13 00:20:10.044 ======================================================== 00:20:10.044 Total : 6512.00 25.44 153.24 128.66 407.13 00:20:10.044 00:20:10.044 Initializing NVMe Controllers 00:20:10.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:10.044 Initialization complete. Launching workers. 
00:20:10.044 ======================================================== 00:20:10.044 Latency(us) 00:20:10.044 Device Information : IOPS MiB/s Average min max 00:20:10.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6639.00 25.93 150.28 122.10 364.27 00:20:10.044 ======================================================== 00:20:10.044 Total : 6639.00 25.93 150.28 122.10 364.27 00:20:10.044 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2032568 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2032569 00:20:10.044 Initializing NVMe Controllers 00:20:10.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:10.044 Initialization complete. Launching workers. 00:20:10.044 ======================================================== 00:20:10.044 Latency(us) 00:20:10.044 Device Information : IOPS MiB/s Average min max 00:20:10.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40917.72 40207.18 41821.01 00:20:10.044 ======================================================== 00:20:10.044 Total : 25.00 0.10 40917.72 40207.18 41821.01 00:20:10.044 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.044 15:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.044 rmmod nvme_tcp 00:20:10.044 rmmod nvme_fabrics 00:20:10.044 rmmod nvme_keyring 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2032321 ']' 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2032321 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2032321 ']' 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2032321 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2032321 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2032321' 00:20:10.044 killing process with pid 2032321 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2032321 00:20:10.044 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2032321 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.303 15:53:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.209 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.209 00:20:12.209 real 0m10.869s 00:20:12.209 user 0m7.595s 
00:20:12.209 sys 0m5.677s 00:20:12.209 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.209 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:12.209 ************************************ 00:20:12.209 END TEST nvmf_control_msg_list 00:20:12.209 ************************************ 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.468 ************************************ 00:20:12.468 START TEST nvmf_wait_for_buf 00:20:12.468 ************************************ 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:12.468 * Looking for test storage... 
00:20:12.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:12.468 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:12.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.469 --rc genhtml_branch_coverage=1 00:20:12.469 --rc genhtml_function_coverage=1 00:20:12.469 --rc genhtml_legend=1 00:20:12.469 --rc geninfo_all_blocks=1 00:20:12.469 --rc geninfo_unexecuted_blocks=1 00:20:12.469 00:20:12.469 ' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:12.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.469 --rc genhtml_branch_coverage=1 00:20:12.469 --rc genhtml_function_coverage=1 00:20:12.469 --rc genhtml_legend=1 00:20:12.469 --rc geninfo_all_blocks=1 00:20:12.469 --rc geninfo_unexecuted_blocks=1 00:20:12.469 00:20:12.469 ' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:12.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.469 --rc genhtml_branch_coverage=1 00:20:12.469 --rc genhtml_function_coverage=1 00:20:12.469 --rc genhtml_legend=1 00:20:12.469 --rc geninfo_all_blocks=1 00:20:12.469 --rc geninfo_unexecuted_blocks=1 00:20:12.469 00:20:12.469 ' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:12.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.469 --rc genhtml_branch_coverage=1 00:20:12.469 --rc genhtml_function_coverage=1 00:20:12.469 --rc genhtml_legend=1 00:20:12.469 --rc geninfo_all_blocks=1 00:20:12.469 --rc geninfo_unexecuted_blocks=1 00:20:12.469 00:20:12.469 ' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.469 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.728 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.729 15:53:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:19.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:19.297 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.297 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:19.298 Found net devices under 0000:af:00.0: cvl_0_0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.298 15:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:19.298 Found net devices under 0000:af:00.1: cvl_0_1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.298 15:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.298 15:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:20:19.298 00:20:19.298 --- 10.0.0.2 ping statistics --- 00:20:19.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.298 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:19.298 00:20:19.298 --- 10.0.0.1 ping statistics --- 00:20:19.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.298 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2036288 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2036288 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2036288 ']' 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 [2024-12-09 15:53:13.678343] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:20:19.298 [2024-12-09 15:53:13.678388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.298 [2024-12-09 15:53:13.763087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.298 [2024-12-09 15:53:13.801982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.298 [2024-12-09 15:53:13.802018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:19.298 [2024-12-09 15:53:13.802025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.298 [2024-12-09 15:53:13.802031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.298 [2024-12-09 15:53:13.802037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.298 [2024-12-09 15:53:13.802551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 
15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.298 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.299 Malloc0 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.299 [2024-12-09 15:53:13.972055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.299 15:53:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.299 [2024-12-09 15:53:14.000237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.299 15:53:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:19.299 15:53:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.299 [2024-12-09 15:53:14.078290] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:20.674 Initializing NVMe Controllers 00:20:20.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:20.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:20.674 Initialization complete. Launching workers. 00:20:20.674 ======================================================== 00:20:20.674 Latency(us) 00:20:20.674 Device Information : IOPS MiB/s Average min max 00:20:20.674 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.55 16.07 32207.70 7280.67 63845.08 00:20:20.674 ======================================================== 00:20:20.674 Total : 128.55 16.07 32207.70 7280.67 63845.08 00:20:20.674 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.674 15:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.674 rmmod nvme_tcp 00:20:20.674 rmmod nvme_fabrics 00:20:20.674 rmmod nvme_keyring 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2036288 ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2036288 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2036288 ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2036288 
00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2036288 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2036288' 00:20:20.674 killing process with pid 2036288 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2036288 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2036288 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.674 15:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.674 15:53:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.207 00:20:23.207 real 0m10.439s 00:20:23.207 user 0m4.025s 00:20:23.207 sys 0m4.827s 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:23.207 ************************************ 00:20:23.207 END TEST nvmf_wait_for_buf 00:20:23.207 ************************************ 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:23.207 15:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:23.208 15:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:23.208 15:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.208 15:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.482 
15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.482 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:28.483 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.483 15:53:23 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:28.483 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:28.483 Found net devices under 0000:af:00.0: cvl_0_0 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:28.483 Found net devices under 0000:af:00.1: cvl_0_1 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.483 ************************************ 00:20:28.483 START TEST nvmf_perf_adq 00:20:28.483 ************************************ 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:28.483 * Looking for test storage... 00:20:28.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.483 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.742 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
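The `lt 1.15 2` trace above walks scripts/common.sh's version comparison: split both versions on `.`/`-`/`:` into arrays, then compare numeric fields left to right. A minimal standalone sketch of the same idea (the `ver_lt` helper name is hypothetical, not the SPDK function):

```shell
#!/usr/bin/env bash
# ver_lt A B — exit 0 if version A is strictly less than version B.
# Splits on '.', '-' and ':' like the traced cmp_versions, comparing
# numeric fields left to right; a missing field counts as 0.
ver_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
}
```

Comparing field by field (not lexically) is what makes `1.2.3 < 1.10` come out true, which a plain string comparison would get wrong.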
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.743 --rc genhtml_branch_coverage=1 00:20:28.743 --rc genhtml_function_coverage=1 00:20:28.743 --rc genhtml_legend=1 00:20:28.743 --rc geninfo_all_blocks=1 00:20:28.743 --rc geninfo_unexecuted_blocks=1 00:20:28.743 00:20:28.743 ' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.743 --rc genhtml_branch_coverage=1 00:20:28.743 --rc genhtml_function_coverage=1 00:20:28.743 --rc genhtml_legend=1 00:20:28.743 --rc geninfo_all_blocks=1 00:20:28.743 --rc geninfo_unexecuted_blocks=1 00:20:28.743 00:20:28.743 ' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.743 --rc genhtml_branch_coverage=1 00:20:28.743 --rc genhtml_function_coverage=1 00:20:28.743 --rc genhtml_legend=1 00:20:28.743 --rc geninfo_all_blocks=1 00:20:28.743 --rc geninfo_unexecuted_blocks=1 00:20:28.743 00:20:28.743 ' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.743 --rc genhtml_branch_coverage=1 00:20:28.743 --rc genhtml_function_coverage=1 00:20:28.743 --rc genhtml_legend=1 00:20:28.743 --rc geninfo_all_blocks=1 00:20:28.743 --rc geninfo_unexecuted_blocks=1 00:20:28.743 00:20:28.743 ' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.743 15:53:23 
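Each nested `source` of paths/export.sh re-prepends the same toolchain directories, which is why PATH in the trace above keeps accumulating repeated `/opt/go`, `/opt/golangci` and `/opt/protoc` entries. A hedged sketch of an idempotent prepend that would avoid that growth (the `path_prepend` helper is hypothetical, not part of SPDK):

```shell
#!/usr/bin/env bash
# path_prepend DIR — put DIR at the front of PATH exactly once.
# Re-sourcing a script that uses this keeps PATH stable instead of
# duplicating entries on every source, as in the trace above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
```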
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.743 15:53:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.313 15:53:29 
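The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: an empty string fed to an arithmetic test. A minimal sketch of guarding such a test by defaulting the value first (the `check_flag`/`hugepages_flag` names are illustrative, standing in for whatever variable was empty at nvmf/common.sh line 33):

```shell
#!/usr/bin/env bash
# Guarding an integer comparison against an unset/empty variable:
# default it to 0 before the -eq test so '[' never sees ''.
check_flag() {
    local hugepages_flag="${1:-0}"   # empty or unset becomes 0
    if [ "$hugepages_flag" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}
```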
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:35.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:35.313 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:35.313 Found net devices under 0000:af:00.0: cvl_0_0 00:20:35.313 15:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:35.313 Found net devices under 0000:af:00.1: cvl_0_1 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
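The discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` to map each NIC's PCI address to its kernel interface names, then strips the path prefix with `"${pci_net_devs[@]##*/}"`. The same walk, sketched against a caller-supplied sysfs root so it can run without the hardware (function name and mock layout are illustrative):

```shell
#!/usr/bin/env bash
# List net interfaces under each PCI device directory, mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace.
# $1 is the sysfs root (defaults to the real /sys/bus/pci/devices).
list_pci_net_devs() {
    local root="${1:-/sys/bus/pci/devices}" pci dev
    shopt -s nullglob                # empty glob expands to nothing
    for pci in "$root"/*; do
        for dev in "$pci"/net/*; do
            # strip the path prefix, like "${pci_net_devs[@]##*/}"
            echo "Found net devices under ${pci##*/}: ${dev##*/}"
        done
    done
}
```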
00:20:35.313 15:53:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:35.572 15:53:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:38.106 15:53:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:43.383 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:43.383 15:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:43.383 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.383 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:43.384 Found net devices under 0000:af:00.0: cvl_0_0 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:43.384 Found net devices under 0000:af:00.1: cvl_0_1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.11 ms 00:20:43.384 00:20:43.384 --- 10.0.0.2 ping statistics --- 00:20:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.384 rtt min/avg/max/mdev = 1.105/1.105/1.105/0.000 ms 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:20:43.384 00:20:43.384 --- 10.0.0.1 ping statistics --- 00:20:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.384 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2044707 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2044707 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2044707 ']' 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.384 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.384 [2024-12-09 15:53:38.493995] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:20:43.384 [2024-12-09 15:53:38.494045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.384 [2024-12-09 15:53:38.574455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.739 [2024-12-09 15:53:38.620921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.739 [2024-12-09 15:53:38.620953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.739 [2024-12-09 15:53:38.620961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.739 [2024-12-09 15:53:38.620969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.739 [2024-12-09 15:53:38.620974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.739 [2024-12-09 15:53:38.622475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.739 [2024-12-09 15:53:38.622582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.739 [2024-12-09 15:53:38.622693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.739 [2024-12-09 15:53:38.622694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:43.739 15:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 [2024-12-09 15:53:38.824719] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.739 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.739 Malloc1 00:20:43.739 15:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.740 [2024-12-09 15:53:38.887186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2044897 00:20:43.740 15:53:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:43.740 15:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:45.700 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:45.700 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.700 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.700 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.700 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:45.700 "tick_rate": 2100000000, 00:20:45.700 "poll_groups": [ 00:20:45.700 { 00:20:45.700 "name": "nvmf_tgt_poll_group_000", 00:20:45.700 "admin_qpairs": 1, 00:20:45.700 "io_qpairs": 1, 00:20:45.700 "current_admin_qpairs": 1, 00:20:45.700 "current_io_qpairs": 1, 00:20:45.700 "pending_bdev_io": 0, 00:20:45.700 "completed_nvme_io": 19865, 00:20:45.700 "transports": [ 00:20:45.700 { 00:20:45.700 "trtype": "TCP" 00:20:45.700 } 00:20:45.700 ] 00:20:45.700 }, 00:20:45.700 { 00:20:45.700 "name": "nvmf_tgt_poll_group_001", 00:20:45.700 "admin_qpairs": 0, 00:20:45.700 "io_qpairs": 1, 00:20:45.700 "current_admin_qpairs": 0, 00:20:45.700 "current_io_qpairs": 1, 00:20:45.700 "pending_bdev_io": 0, 00:20:45.700 "completed_nvme_io": 20316, 00:20:45.700 "transports": [ 00:20:45.700 { 00:20:45.700 "trtype": "TCP" 00:20:45.700 } 00:20:45.700 ] 00:20:45.700 }, 00:20:45.700 { 00:20:45.700 "name": "nvmf_tgt_poll_group_002", 00:20:45.700 "admin_qpairs": 0, 00:20:45.700 "io_qpairs": 1, 00:20:45.700 "current_admin_qpairs": 0, 00:20:45.700 "current_io_qpairs": 1, 00:20:45.700 "pending_bdev_io": 0, 00:20:45.700 "completed_nvme_io": 19899, 00:20:45.700 
"transports": [ 00:20:45.700 { 00:20:45.700 "trtype": "TCP" 00:20:45.700 } 00:20:45.700 ] 00:20:45.700 }, 00:20:45.700 { 00:20:45.700 "name": "nvmf_tgt_poll_group_003", 00:20:45.700 "admin_qpairs": 0, 00:20:45.700 "io_qpairs": 1, 00:20:45.700 "current_admin_qpairs": 0, 00:20:45.700 "current_io_qpairs": 1, 00:20:45.700 "pending_bdev_io": 0, 00:20:45.700 "completed_nvme_io": 19607, 00:20:45.700 "transports": [ 00:20:45.700 { 00:20:45.700 "trtype": "TCP" 00:20:45.700 } 00:20:45.700 ] 00:20:45.700 } 00:20:45.700 ] 00:20:45.700 }' 00:20:45.958 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:45.958 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:45.958 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:45.958 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:45.958 15:53:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2044897 00:20:54.065 Initializing NVMe Controllers 00:20:54.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:54.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:54.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:54.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:54.065 Initialization complete. Launching workers. 
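The `nvmf_get_stats` JSON captured above is what the `jq`/`wc -l` pipeline in the log reduces to `count=4`. The same sanity check can be sketched in a few lines of Python; the per-poll-group counters below are copied verbatim from the log output, and the dict is an abbreviated stand-in for the full RPC response.

```python
# Abbreviated nvmf_get_stats output; the current_io_qpairs and
# completed_nvme_io values are taken verbatim from the log above.
nvmf_stats = {
    "tick_rate": 2100000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1, "completed_nvme_io": 19865},
        {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1, "completed_nvme_io": 20316},
        {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1, "completed_nvme_io": 19899},
        {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1, "completed_nvme_io": 19607},
    ],
}

# Equivalent of `jq -r '.poll_groups[] | select(.current_io_qpairs == 1)' | wc -l`:
# with ADQ steering working, each of the 4 poll groups owns exactly one IO qpair.
active = [g for g in nvmf_stats["poll_groups"] if g["current_io_qpairs"] == 1]
count = len(active)

# Total IOs completed across all poll groups after the 2s sleep.
total_io = sum(g["completed_nvme_io"] for g in nvmf_stats["poll_groups"])
print(count, total_io)  # -> 4 79687
```

This mirrors the `[[ 4 -ne 4 ]]` guard at `perf_adq.sh@87`: the test only passes when every poll group carries exactly one active IO qpair.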
00:20:54.065 ======================================================== 00:20:54.065 Latency(us) 00:20:54.065 Device Information : IOPS MiB/s Average min max 00:20:54.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10481.20 40.94 6106.87 2229.24 10699.14 00:20:54.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10517.20 41.08 6086.71 2439.15 10271.39 00:20:54.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10419.60 40.70 6143.80 1972.49 10710.35 00:20:54.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10363.40 40.48 6175.04 2301.77 10660.22 00:20:54.065 ======================================================== 00:20:54.065 Total : 41781.40 163.21 6127.91 1972.49 10710.35 00:20:54.065 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.065 rmmod nvme_tcp 00:20:54.065 rmmod nvme_fabrics 00:20:54.065 rmmod nvme_keyring 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:54.065 15:53:49 
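The per-core figures in the `spdk_nvme_perf` summary above can be cross-checked arithmetically. The IOPS values below are copied from the log (lcores 4-7, matching the `-c 0xF0` core mask), and their sum reproduces the reported Total line; this is just an illustrative check, not part of the test script.

```python
# Per-core IOPS reported by spdk_nvme_perf, copied from the log above.
iops_per_core = {4: 10481.20, 5: 10517.20, 6: 10419.60, 7: 10363.40}

# The "Total" row in the summary should be the sum of the per-core rows.
total_iops = sum(iops_per_core.values())
print(round(total_iops, 2))  # -> 41781.4, matching the reported Total of 41781.40
```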
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2044707 ']' 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2044707 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2044707 ']' 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2044707 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044707 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044707' 00:20:54.065 killing process with pid 2044707 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2044707 00:20:54.065 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2044707 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:54.324 
15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.324 15:53:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.229 15:53:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.229 15:53:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:56.229 15:53:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:56.229 15:53:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:57.606 15:53:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:00.140 15:53:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.414 15:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:05.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:05.414 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:05.414 Found net devices under 0000:af:00.0: cvl_0_0 00:21:05.414 15:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:05.414 Found net devices under 0000:af:00.1: cvl_0_1 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.414 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:05.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:21:05.415 00:21:05.415 --- 10.0.0.2 ping statistics --- 00:21:05.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.415 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
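The namespace and address plumbing traced above (the `nvmf_tcp_init` path in nvmf/common.sh) can be summarized as a standalone sketch. Interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are copied from the log; the commands assume root on a host that actually has these E810 ports:

```shell
# Sketch of nvmf_tcp_init as replayed in this log: put the target-side
# port in its own network namespace so the SPDK target and the
# initiator use separate network stacks, then verify reachability.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port; the comment tag lets the teardown path
# strip exactly these rules again with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity checks, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```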
00:21:05.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:05.415 00:21:05.415 --- 10.0.0.1 ping statistics --- 00:21:05.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.415 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:05.415 net.core.busy_poll = 1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:05.415 net.core.busy_read = 1 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:05.415 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2048811 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2048811 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2048811 ']' 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.675 15:54:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.675 [2024-12-09 15:54:00.749539] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:21:05.675 [2024-12-09 15:54:00.749588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.675 [2024-12-09 15:54:00.826656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.675 [2024-12-09 15:54:00.868774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.675 [2024-12-09 15:54:00.868810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.675 [2024-12-09 15:54:00.868817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.675 [2024-12-09 15:54:00.868823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:05.675 [2024-12-09 15:54:00.868828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.675 [2024-12-09 15:54:00.870197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.675 [2024-12-09 15:54:00.870286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.675 [2024-12-09 15:54:00.870330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.675 [2024-12-09 15:54:00.870331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
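The `adq_configure_driver` step exercised earlier in this run (before `nvmfappstart`) reduces to the following sketch: enable hardware TC offload, turn on busy polling, and steer NVMe/TCP traffic on port 4420 into the second traffic class. All values are copied from the log and assume an E810 (ice) NIC inside the target namespace:

```shell
# Sketch of adq_configure_driver (target/perf_adq.sh) as traced above.
NS=cvl_0_0_ns_spdk
IF=cvl_0_0

ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# Busy polling keeps the socket layer spinning instead of sleeping
# on interrupts, which is what the sock-priority/ADQ pairing relies on.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: queues 0-1 serve TC0, queues 2-3 serve TC1,
# offloaded to hardware in channel mode.
ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IF" ingress

# Hardware flower filter: NVMe/TCP traffic to 10.0.0.2:4420 -> TC1.
ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 \
    skip_sw hw_tc 1
```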
00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 [2024-12-09 15:54:01.757385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 Malloc1 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.610 [2024-12-09 15:54:01.820862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2049140 
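The sequence of `rpc_cmd` calls above (`adq_configure_nvmf_target`) can be sketched with SPDK's `rpc.py` client. The `rpc.py` path is an assumption; the RPC names and arguments are taken verbatim from the log:

```shell
# Sketch of adq_configure_nvmf_target: tune the posix sock layer for
# ADQ, start the framework, and export a malloc namespace over TCP.
RPC=./scripts/rpc.py   # assumed location inside an SPDK checkout

# Placement id 1 pins a connection's work to the core owning its
# hardware queue; zero-copy send reduces per-IO overhead.
IMPL=$($RPC sock_get_default_impl | jq -r .impl_name)
$RPC sock_impl_set_options --enable-placement-id 1 \
    --enable-zerocopy-send-server -i "$IMPL"
$RPC framework_start_init   # target was launched with --wait-for-rpc

# TCP transport with sock priority 1 so traffic matches the ADQ filter.
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# 64 MiB, 512 B-block ram disk exported on 10.0.0.2:4420.
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```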
00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:06.610 15:54:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:09.143 "tick_rate": 2100000000, 00:21:09.143 "poll_groups": [ 00:21:09.143 { 00:21:09.143 "name": "nvmf_tgt_poll_group_000", 00:21:09.143 "admin_qpairs": 1, 00:21:09.143 "io_qpairs": 2, 00:21:09.143 "current_admin_qpairs": 1, 00:21:09.143 "current_io_qpairs": 2, 00:21:09.143 "pending_bdev_io": 0, 00:21:09.143 "completed_nvme_io": 29426, 00:21:09.143 "transports": [ 00:21:09.143 { 00:21:09.143 "trtype": "TCP" 00:21:09.143 } 00:21:09.143 ] 00:21:09.143 }, 00:21:09.143 { 00:21:09.143 "name": "nvmf_tgt_poll_group_001", 00:21:09.143 "admin_qpairs": 0, 00:21:09.143 "io_qpairs": 2, 00:21:09.143 "current_admin_qpairs": 0, 00:21:09.143 "current_io_qpairs": 2, 00:21:09.143 "pending_bdev_io": 0, 00:21:09.143 "completed_nvme_io": 28686, 00:21:09.143 "transports": [ 00:21:09.143 { 00:21:09.143 "trtype": "TCP" 00:21:09.143 } 00:21:09.143 ] 00:21:09.143 }, 00:21:09.143 { 00:21:09.143 "name": "nvmf_tgt_poll_group_002", 00:21:09.143 "admin_qpairs": 0, 00:21:09.143 "io_qpairs": 0, 00:21:09.143 "current_admin_qpairs": 0, 
00:21:09.143 "current_io_qpairs": 0, 00:21:09.143 "pending_bdev_io": 0, 00:21:09.143 "completed_nvme_io": 0, 00:21:09.143 "transports": [ 00:21:09.143 { 00:21:09.143 "trtype": "TCP" 00:21:09.143 } 00:21:09.143 ] 00:21:09.143 }, 00:21:09.143 { 00:21:09.143 "name": "nvmf_tgt_poll_group_003", 00:21:09.143 "admin_qpairs": 0, 00:21:09.143 "io_qpairs": 0, 00:21:09.143 "current_admin_qpairs": 0, 00:21:09.143 "current_io_qpairs": 0, 00:21:09.143 "pending_bdev_io": 0, 00:21:09.143 "completed_nvme_io": 0, 00:21:09.143 "transports": [ 00:21:09.143 { 00:21:09.143 "trtype": "TCP" 00:21:09.143 } 00:21:09.143 ] 00:21:09.143 } 00:21:09.143 ] 00:21:09.143 }' 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:09.143 15:54:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2049140 00:21:17.267 Initializing NVMe Controllers 00:21:17.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:17.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:17.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:17.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:17.267 Initialization complete. Launching workers. 
00:21:17.267 ======================================================== 00:21:17.267 Latency(us) 00:21:17.267 Device Information : IOPS MiB/s Average min max 00:21:17.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8108.20 31.67 7918.45 948.56 52318.63 00:21:17.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7545.70 29.48 8481.97 1469.21 52616.00 00:21:17.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7362.80 28.76 8699.63 1445.76 52311.00 00:21:17.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7130.60 27.85 9003.98 1369.91 54926.00 00:21:17.267 ======================================================== 00:21:17.267 Total : 30147.29 117.76 8507.04 948.56 54926.00 00:21:17.267 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.267 15:54:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.267 rmmod nvme_tcp 00:21:17.267 rmmod nvme_fabrics 00:21:17.267 rmmod nvme_keyring 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:17.267 15:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2048811 ']' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2048811 ']' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2048811' 00:21:17.267 killing process with pid 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2048811 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:17.267 
15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.267 15:54:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.172 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.172 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:19.172 00:21:19.172 real 0m50.779s 00:21:19.172 user 2m46.725s 00:21:19.172 sys 0m10.166s 00:21:19.172 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.172 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.172 ************************************ 00:21:19.172 END TEST nvmf_perf_adq 00:21:19.172 ************************************ 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.431 ************************************ 00:21:19.431 START TEST nvmf_shutdown 00:21:19.431 ************************************ 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.431 * Looking for test storage... 00:21:19.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.431 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.432 15:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.432 --rc genhtml_branch_coverage=1 00:21:19.432 --rc genhtml_function_coverage=1 00:21:19.432 --rc genhtml_legend=1 00:21:19.432 --rc geninfo_all_blocks=1 00:21:19.432 --rc geninfo_unexecuted_blocks=1 00:21:19.432 00:21:19.432 ' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.432 --rc genhtml_branch_coverage=1 00:21:19.432 --rc genhtml_function_coverage=1 00:21:19.432 --rc genhtml_legend=1 00:21:19.432 --rc geninfo_all_blocks=1 00:21:19.432 --rc geninfo_unexecuted_blocks=1 00:21:19.432 00:21:19.432 ' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.432 --rc genhtml_branch_coverage=1 00:21:19.432 --rc genhtml_function_coverage=1 00:21:19.432 --rc genhtml_legend=1 00:21:19.432 --rc geninfo_all_blocks=1 00:21:19.432 --rc geninfo_unexecuted_blocks=1 00:21:19.432 00:21:19.432 ' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.432 --rc genhtml_branch_coverage=1 00:21:19.432 --rc genhtml_function_coverage=1 00:21:19.432 --rc genhtml_legend=1 00:21:19.432 --rc geninfo_all_blocks=1 00:21:19.432 --rc geninfo_unexecuted_blocks=1 00:21:19.432 00:21:19.432 ' 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.432 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.691 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:19.691 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:19.691 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.691 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.691 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.692 ************************************ 00:21:19.692 START TEST nvmf_shutdown_tc1 00:21:19.692 ************************************ 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.692 15:54:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:26.260 15:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.260 15:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:26.260 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.260 15:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:26.260 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.260 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:26.261 Found net devices under 0000:af:00.0: cvl_0_0 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:26.261 Found net devices under 0000:af:00.1: cvl_0_1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:26.261 15:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:26.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:21:26.261 00:21:26.261 --- 10.0.0.2 ping statistics --- 00:21:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.261 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:21:26.261 00:21:26.261 --- 10.0.0.1 ping statistics --- 00:21:26.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.261 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2054690 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2054690 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2054690 ']' 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:26.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.261 15:54:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.261 [2024-12-09 15:54:20.799798] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:21:26.261 [2024-12-09 15:54:20.799841] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.261 [2024-12-09 15:54:20.879971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.261 [2024-12-09 15:54:20.921198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.261 [2024-12-09 15:54:20.921239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.261 [2024-12-09 15:54:20.921247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.261 [2024-12-09 15:54:20.921252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.261 [2024-12-09 15:54:20.921257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
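[Editor's note] The `-m 0x1E` mask passed to nvmf_tgt above (and forwarded to DPDK EAL as `-c 0x1E`) selects the CPU cores whose bits are set; the reactor notices that follow confirm cores 1 through 4. A small decoding helper (hypothetical, not an SPDK function) shows the mapping:

```shell
# Decode an SPDK/DPDK hex core mask into the list of selected core IDs.
# 0x1E = binary 11110 -> bits 1,2,3,4 set -> reactors run on cores 1-4.
decode_coremask() {
    local mask=$(( $1 )) bit cores=""
    for bit in $(seq 0 63); do
        (( (mask >> bit) & 1 )) && cores="$cores $bit"
    done
    echo "${cores# }"
}
decode_coremask 0x1E
```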
00:21:26.261 [2024-12-09 15:54:20.922800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.261 [2024-12-09 15:54:20.922908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.261 [2024-12-09 15:54:20.923014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.261 [2024-12-09 15:54:20.923015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.261 [2024-12-09 15:54:21.067418] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.261 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.261 15:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.262 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.262 Malloc1 00:21:26.262 [2024-12-09 15:54:21.192541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.262 Malloc2 00:21:26.262 Malloc3 00:21:26.262 Malloc4 00:21:26.262 Malloc5 00:21:26.262 Malloc6 00:21:26.262 Malloc7 00:21:26.262 Malloc8 00:21:26.521 Malloc9 
00:21:26.521 Malloc10 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2054962 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2054962 /var/tmp/bdevperf.sock 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2054962 ']' 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.521 { 00:21:26.521 "params": { 00:21:26.521 "name": "Nvme$subsystem", 00:21:26.521 "trtype": "$TEST_TRANSPORT", 00:21:26.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.521 "adrfam": "ipv4", 00:21:26.521 "trsvcid": "$NVMF_PORT", 00:21:26.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.521 "hdgst": ${hdgst:-false}, 00:21:26.521 "ddgst": ${ddgst:-false} 00:21:26.521 }, 00:21:26.521 "method": "bdev_nvme_attach_controller" 00:21:26.521 } 00:21:26.521 EOF 00:21:26.521 )") 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.521 { 00:21:26.521 "params": { 00:21:26.521 "name": "Nvme$subsystem", 00:21:26.521 "trtype": "$TEST_TRANSPORT", 00:21:26.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.521 "adrfam": "ipv4", 00:21:26.521 "trsvcid": "$NVMF_PORT", 00:21:26.521 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.521 "hdgst": ${hdgst:-false}, 00:21:26.521 "ddgst": ${ddgst:-false} 00:21:26.521 }, 00:21:26.521 "method": "bdev_nvme_attach_controller" 00:21:26.521 } 00:21:26.521 EOF 00:21:26.521 )") 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.521 { 00:21:26.521 "params": { 00:21:26.521 "name": "Nvme$subsystem", 00:21:26.521 "trtype": "$TEST_TRANSPORT", 00:21:26.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.521 "adrfam": "ipv4", 00:21:26.521 "trsvcid": "$NVMF_PORT", 00:21:26.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.521 "hdgst": ${hdgst:-false}, 00:21:26.521 "ddgst": ${ddgst:-false} 00:21:26.521 }, 00:21:26.521 "method": "bdev_nvme_attach_controller" 00:21:26.521 } 00:21:26.521 EOF 00:21:26.521 )") 00:21:26.521 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": 
${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 
00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 [2024-12-09 15:54:21.668006] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:21:26.522 [2024-12-09 15:54:21.668056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 
00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.522 { 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme$subsystem", 00:21:26.522 "trtype": "$TEST_TRANSPORT", 00:21:26.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "$NVMF_PORT", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:26.522 "hdgst": ${hdgst:-false}, 00:21:26.522 "ddgst": ${ddgst:-false} 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 } 00:21:26.522 EOF 00:21:26.522 )") 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:26.522 15:54:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme1", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme2", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme3", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 
"name": "Nvme4", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme5", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme6", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.522 "name": "Nvme7", 00:21:26.522 "trtype": "tcp", 00:21:26.522 "traddr": "10.0.0.2", 00:21:26.522 "adrfam": "ipv4", 00:21:26.522 "trsvcid": "4420", 00:21:26.522 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:26.522 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:26.522 "hdgst": false, 00:21:26.522 "ddgst": false 00:21:26.522 }, 00:21:26.522 "method": "bdev_nvme_attach_controller" 00:21:26.522 },{ 00:21:26.522 "params": { 00:21:26.523 "name": "Nvme8", 00:21:26.523 "trtype": "tcp", 00:21:26.523 "traddr": "10.0.0.2", 00:21:26.523 "adrfam": "ipv4", 00:21:26.523 "trsvcid": "4420", 00:21:26.523 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:26.523 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:26.523 
"hdgst": false, 00:21:26.523 "ddgst": false 00:21:26.523 }, 00:21:26.523 "method": "bdev_nvme_attach_controller" 00:21:26.523 },{ 00:21:26.523 "params": { 00:21:26.523 "name": "Nvme9", 00:21:26.523 "trtype": "tcp", 00:21:26.523 "traddr": "10.0.0.2", 00:21:26.523 "adrfam": "ipv4", 00:21:26.523 "trsvcid": "4420", 00:21:26.523 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:26.523 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:26.523 "hdgst": false, 00:21:26.523 "ddgst": false 00:21:26.523 }, 00:21:26.523 "method": "bdev_nvme_attach_controller" 00:21:26.523 },{ 00:21:26.523 "params": { 00:21:26.523 "name": "Nvme10", 00:21:26.523 "trtype": "tcp", 00:21:26.523 "traddr": "10.0.0.2", 00:21:26.523 "adrfam": "ipv4", 00:21:26.523 "trsvcid": "4420", 00:21:26.523 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:26.523 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:26.523 "hdgst": false, 00:21:26.523 "ddgst": false 00:21:26.523 }, 00:21:26.523 "method": "bdev_nvme_attach_controller" 00:21:26.523 }' 00:21:26.523 [2024-12-09 15:54:21.743472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.781 [2024-12-09 15:54:21.783406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2054962 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:28.685 15:54:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:29.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2054962 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2054690 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.622 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.622 { 00:21:29.622 "params": { 00:21:29.622 "name": "Nvme$subsystem", 00:21:29.622 "trtype": "$TEST_TRANSPORT", 00:21:29.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.622 "adrfam": "ipv4", 00:21:29.622 "trsvcid": "$NVMF_PORT", 00:21:29.622 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.622 "hdgst": ${hdgst:-false}, 00:21:29.622 "ddgst": ${ddgst:-false} 00:21:29.622 }, 00:21:29.622 "method": "bdev_nvme_attach_controller" 00:21:29.622 } 00:21:29.622 EOF 00:21:29.622 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": 
${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 
00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:21:29.623 [2024-12-09 15:54:24.592982] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:21:29.623 [2024-12-09 15:54:24.593030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055448 ] 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.623 { 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme$subsystem", 00:21:29.623 "trtype": "$TEST_TRANSPORT", 00:21:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "$NVMF_PORT", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.623 "hdgst": ${hdgst:-false}, 00:21:29.623 "ddgst": ${ddgst:-false} 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 } 00:21:29.623 EOF 00:21:29.623 )") 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.623 15:54:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme1", 00:21:29.623 "trtype": "tcp", 00:21:29.623 "traddr": "10.0.0.2", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "4420", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.623 "hdgst": false, 00:21:29.623 "ddgst": false 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 },{ 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme2", 00:21:29.623 "trtype": "tcp", 00:21:29.623 "traddr": "10.0.0.2", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "4420", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.623 "hdgst": false, 00:21:29.623 "ddgst": false 00:21:29.623 }, 00:21:29.623 "method": "bdev_nvme_attach_controller" 00:21:29.623 },{ 00:21:29.623 "params": { 00:21:29.623 "name": "Nvme3", 00:21:29.623 "trtype": "tcp", 00:21:29.623 "traddr": "10.0.0.2", 00:21:29.623 "adrfam": "ipv4", 00:21:29.623 "trsvcid": "4420", 00:21:29.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.623 "hdgst": false, 00:21:29.623 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme4", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 
00:21:29.624 "name": "Nvme5", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme6", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme7", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme8", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme9", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 },{ 00:21:29.624 "params": { 00:21:29.624 "name": "Nvme10", 00:21:29.624 "trtype": "tcp", 00:21:29.624 "traddr": "10.0.0.2", 00:21:29.624 "adrfam": "ipv4", 00:21:29.624 "trsvcid": "4420", 00:21:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.624 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.624 "hdgst": false, 00:21:29.624 "ddgst": false 00:21:29.624 }, 00:21:29.624 "method": "bdev_nvme_attach_controller" 00:21:29.624 }' 00:21:29.624 [2024-12-09 15:54:24.670974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.624 [2024-12-09 15:54:24.711096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.999 Running I/O for 1 seconds... 00:21:31.935 2251.00 IOPS, 140.69 MiB/s 00:21:31.935 Latency(us) 00:21:31.935 [2024-12-09T14:54:27.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.935 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme1n1 : 1.05 243.08 15.19 0.00 0.00 260821.82 19348.72 215707.06 00:21:31.935 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme2n1 : 1.03 248.97 15.56 0.00 0.00 250688.61 15104.49 227690.79 00:21:31.935 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme3n1 : 1.10 289.99 18.12 0.00 0.00 212456.20 14293.09 215707.06 00:21:31.935 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme4n1 : 1.08 300.61 18.79 0.00 0.00 200221.73 22219.82 201726.05 00:21:31.935 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme5n1 : 1.12 292.81 18.30 0.00 0.00 202987.05 7614.66 211712.49 00:21:31.935 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.935 Verification LBA range: start 0x0 length 0x400 00:21:31.935 Nvme6n1 : 1.13 283.83 17.74 0.00 0.00 208049.49 14917.24 217704.35 00:21:31.935 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.936 Verification LBA range: start 0x0 length 0x400 00:21:31.936 Nvme7n1 : 1.12 286.82 17.93 0.00 0.00 202562.71 14043.43 200727.41 00:21:31.936 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.936 Verification LBA range: start 0x0 length 0x400 00:21:31.936 Nvme8n1 : 1.12 290.51 18.16 0.00 0.00 196205.61 4587.52 216705.71 00:21:31.936 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.936 Verification LBA range: start 0x0 length 0x400 00:21:31.936 Nvme9n1 : 1.15 287.92 18.00 0.00 0.00 193380.45 1880.26 216705.71 00:21:31.936 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.936 Verification LBA range: start 0x0 length 0x400 00:21:31.936 Nvme10n1 : 1.16 329.94 20.62 0.00 0.00 169149.07 4462.69 226692.14 00:21:31.936 [2024-12-09T14:54:27.164Z] =================================================================================================================== 00:21:31.936 [2024-12-09T14:54:27.164Z] Total : 2854.47 178.40 0.00 0.00 206862.52 1880.26 227690.79 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
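The MiB/s column in the bdevperf summary above is simply the IOPS figure scaled by the 64 KiB I/O size shown in each Job line. A minimal sketch of that conversion, using figures taken directly from the table (the helper name is illustrative, not part of the harness):

```python
def iops_to_mib_s(iops: float, io_size_bytes: int = 65536) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size (64 KiB here)."""
    return iops * io_size_bytes / 2**20

# Figures from the bdevperf summary above (64 KiB I/Os):
print(f"{iops_to_mib_s(2251.00):.2f}")   # 140.69, as reported next to 2251.00 IOPS
print(f"{iops_to_mib_s(2854.47):.2f}")   # 178.40, matching the reported Total row
```

With a 64 KiB I/O size the conversion reduces to IOPS / 16, which is why the two columns track each other exactly throughout the table.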
00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:32.194 rmmod nvme_tcp 00:21:32.194 rmmod nvme_fabrics 00:21:32.194 rmmod nvme_keyring 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2054690 ']' 00:21:32.194 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2054690 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2054690 ']' 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2054690 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2054690 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2054690' 00:21:32.195 killing process with pid 2054690 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2054690 00:21:32.195 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2054690 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.763 15:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.763 15:54:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.665 00:21:34.665 real 0m15.084s 00:21:34.665 user 0m32.898s 00:21:34.665 sys 0m5.775s 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.665 ************************************ 00:21:34.665 END TEST nvmf_shutdown_tc1 00:21:34.665 ************************************ 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.665 ************************************ 00:21:34.665 
START TEST nvmf_shutdown_tc2 00:21:34.665 ************************************ 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.665 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.665 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.925 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.925 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:34.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:34.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:34.925 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.925 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:34.925 Found net devices under 0000:af:00.0: cvl_0_0 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.925 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:34.926 Found net devices under 0000:af:00.1: cvl_0_1 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:34.926 15:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.926 15:54:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.926 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.926 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:35.185 00:21:35.185 --- 10.0.0.2 ping statistics --- 00:21:35.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.185 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:21:35.185 00:21:35.185 --- 10.0.0.1 ping statistics --- 00:21:35.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.185 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.185 15:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2056463 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2056463 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2056463 ']' 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
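`waitforlisten` above blocks until the freshly launched nvmf_tgt (pid 2056463) accepts connections on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, assuming a simple connect-and-retry loop (the timeout value and function name are illustrative, not taken from the SPDK scripts):

```python
import socket
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0) -> bool:
    """Poll a UNIX domain socket until something is accepting connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)   # succeeds once the target is listening
            return True
        except OSError:
            time.sleep(0.1)        # not up yet; retry until the deadline
        finally:
            s.close()
    return False
```

The real helper additionally checks that the target process is still alive between retries, so a crashed nvmf_tgt fails the test quickly instead of burning the whole timeout.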
00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.185 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.185 [2024-12-09 15:54:30.365062] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:21:35.185 [2024-12-09 15:54:30.365106] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.445 [2024-12-09 15:54:30.422876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.445 [2024-12-09 15:54:30.461769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.445 [2024-12-09 15:54:30.461804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.445 [2024-12-09 15:54:30.461812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.445 [2024-12-09 15:54:30.461819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.445 [2024-12-09 15:54:30.461828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.445 [2024-12-09 15:54:30.463256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.445 [2024-12-09 15:54:30.463361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.445 [2024-12-09 15:54:30.463469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.445 [2024-12-09 15:54:30.463470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 [2024-12-09 15:54:30.607703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.445 15:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.445 15:54:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.704 Malloc1 00:21:35.704 [2024-12-09 15:54:30.720964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.704 Malloc2 00:21:35.704 Malloc3 00:21:35.704 Malloc4 00:21:35.704 Malloc5 00:21:35.704 Malloc6 00:21:35.963 Malloc7 00:21:35.963 Malloc8 00:21:35.963 Malloc9 
00:21:35.963 Malloc10 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2056731 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2056731 /var/tmp/bdevperf.sock 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2056731 ']' 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.963 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:35.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 "adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": ${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 
"adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": ${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 "adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": ${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 "adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": ${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 "adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": ${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.964 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.964 { 00:21:35.964 "params": { 00:21:35.964 "name": "Nvme$subsystem", 00:21:35.964 "trtype": "$TEST_TRANSPORT", 00:21:35.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.964 "adrfam": "ipv4", 00:21:35.964 "trsvcid": "$NVMF_PORT", 00:21:35.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.964 "hdgst": ${hdgst:-false}, 00:21:35.964 "ddgst": 
${ddgst:-false} 00:21:35.964 }, 00:21:35.964 "method": "bdev_nvme_attach_controller" 00:21:35.964 } 00:21:35.964 EOF 00:21:35.964 )") 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.223 { 00:21:36.223 "params": { 00:21:36.223 "name": "Nvme$subsystem", 00:21:36.223 "trtype": "$TEST_TRANSPORT", 00:21:36.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.223 "adrfam": "ipv4", 00:21:36.223 "trsvcid": "$NVMF_PORT", 00:21:36.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.223 "hdgst": ${hdgst:-false}, 00:21:36.223 "ddgst": ${ddgst:-false} 00:21:36.223 }, 00:21:36.223 "method": "bdev_nvme_attach_controller" 00:21:36.223 } 00:21:36.223 EOF 00:21:36.223 )") 00:21:36.223 [2024-12-09 15:54:31.197780] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:21:36.223 [2024-12-09 15:54:31.197830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056731 ] 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.223 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.223 { 00:21:36.223 "params": { 00:21:36.223 "name": "Nvme$subsystem", 00:21:36.223 "trtype": "$TEST_TRANSPORT", 00:21:36.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.223 "adrfam": "ipv4", 00:21:36.223 "trsvcid": "$NVMF_PORT", 00:21:36.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.223 "hdgst": ${hdgst:-false}, 00:21:36.223 "ddgst": ${ddgst:-false} 00:21:36.223 }, 00:21:36.223 "method": "bdev_nvme_attach_controller" 00:21:36.223 } 00:21:36.223 EOF 00:21:36.223 )") 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.224 { 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme$subsystem", 00:21:36.224 "trtype": "$TEST_TRANSPORT", 00:21:36.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "$NVMF_PORT", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.224 "hdgst": 
${hdgst:-false}, 00:21:36.224 "ddgst": ${ddgst:-false} 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 } 00:21:36.224 EOF 00:21:36.224 )") 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.224 { 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme$subsystem", 00:21:36.224 "trtype": "$TEST_TRANSPORT", 00:21:36.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "$NVMF_PORT", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.224 "hdgst": ${hdgst:-false}, 00:21:36.224 "ddgst": ${ddgst:-false} 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 } 00:21:36.224 EOF 00:21:36.224 )") 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:36.224 15:54:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme1", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme2", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme3", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme4", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 
00:21:36.224 "name": "Nvme5", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme6", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme7", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme8", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme9", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 },{ 00:21:36.224 "params": { 00:21:36.224 "name": "Nvme10", 00:21:36.224 "trtype": "tcp", 00:21:36.224 "traddr": "10.0.0.2", 00:21:36.224 "adrfam": "ipv4", 00:21:36.224 "trsvcid": "4420", 00:21:36.224 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.224 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.224 "hdgst": false, 00:21:36.224 "ddgst": false 00:21:36.224 }, 00:21:36.224 "method": "bdev_nvme_attach_controller" 00:21:36.224 }' 00:21:36.224 [2024-12-09 15:54:31.271841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.224 [2024-12-09 15:54:31.311283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.601 Running I/O for 10 seconds... 00:21:37.860 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.860 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:37.860 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.860 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.860 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:38.120 15:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 2056731 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2056731 ']' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2056731 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056731 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056731' 00:21:38.120 killing process with pid 2056731 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2056731 00:21:38.120 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2056731 00:21:38.120 Received shutdown signal, test time was about 0.639187 seconds 00:21:38.120 00:21:38.120 Latency(us) 00:21:38.120 [2024-12-09T14:54:33.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.120 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme1n1 : 0.63 306.63 19.16 0.00 0.00 205414.07 29834.48 219701.64 00:21:38.120 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme2n1 : 0.63 303.80 18.99 0.00 0.00 202087.21 16727.28 204721.98 00:21:38.120 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme3n1 : 0.62 318.74 19.92 0.00 0.00 185657.21 6147.90 201726.05 00:21:38.120 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme4n1 : 0.62 317.66 19.85 0.00 0.00 181928.10 3916.56 191739.61 00:21:38.120 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme5n1 : 0.63 304.89 19.06 0.00 0.00 185815.37 17101.78 216705.71 00:21:38.120 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme6n1 : 0.60 211.79 13.24 0.00 0.00 258467.35 46686.60 210713.84 00:21:38.120 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme7n1 : 0.64 300.68 18.79 0.00 0.00 177546.00 16602.45 204721.98 00:21:38.120 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme8n1 : 0.64 301.35 18.83 0.00 0.00 172953.60 13793.77 217704.35 00:21:38.120 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme9n1 : 0.61 210.95 13.18 0.00 0.00 236589.84 31706.94 224694.86 00:21:38.120 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme10n1 : 0.61 208.31 13.02 0.00 0.00 233016.56 20971.52 
238675.87 00:21:38.120 [2024-12-09T14:54:33.348Z] =================================================================================================================== 00:21:38.120 [2024-12-09T14:54:33.348Z] Total : 2784.80 174.05 0.00 0.00 199551.48 3916.56 238675.87 00:21:38.379 15:54:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2056463 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.315 rmmod nvme_tcp 00:21:39.315 rmmod nvme_fabrics 00:21:39.315 rmmod nvme_keyring 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2056463 ']' 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2056463 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2056463 ']' 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2056463 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.315 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056463 00:21:39.575 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.575 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.575 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056463' 00:21:39.575 killing process with pid 2056463 00:21:39.575 15:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2056463 00:21:39.575 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2056463 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.834 15:54:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.370 00:21:42.370 real 
0m7.150s 00:21:42.370 user 0m20.012s 00:21:42.370 sys 0m1.253s 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:42.370 ************************************ 00:21:42.370 END TEST nvmf_shutdown_tc2 00:21:42.370 ************************************ 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.370 ************************************ 00:21:42.370 START TEST nvmf_shutdown_tc3 00:21:42.370 ************************************ 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.370 
15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:42.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:42.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.370 15:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:42.370 Found net devices under 0000:af:00.0: cvl_0_0 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.370 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.371 
15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:42.371 Found net devices under 0000:af:00.1: cvl_0_1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.371 15:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:21:42.371 00:21:42.371 --- 10.0.0.2 ping statistics --- 00:21:42.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.371 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:42.371 00:21:42.371 --- 10.0.0.1 ping statistics --- 00:21:42.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.371 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.371 
15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2057762 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2057762 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2057762 ']' 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.371 15:54:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.371 [2024-12-09 15:54:37.492889] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:21:42.371 [2024-12-09 15:54:37.492941] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.371 [2024-12-09 15:54:37.572271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.630 [2024-12-09 15:54:37.614033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.630 [2024-12-09 15:54:37.614071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.630 [2024-12-09 15:54:37.614079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.630 [2024-12-09 15:54:37.614085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.630 [2024-12-09 15:54:37.614089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.630 [2024-12-09 15:54:37.615554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.630 [2024-12-09 15:54:37.615665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.630 [2024-12-09 15:54:37.615772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.630 [2024-12-09 15:54:37.615773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.197 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.197 [2024-12-09 15:54:38.375268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.198 15:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.198 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.457 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:43.457 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:43.457 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:43.457 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.457 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.457 Malloc1 00:21:43.457 [2024-12-09 15:54:38.481689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.457 Malloc2 00:21:43.457 Malloc3 00:21:43.457 Malloc4 00:21:43.457 Malloc5 00:21:43.457 Malloc6 00:21:43.716 Malloc7 00:21:43.716 Malloc8 00:21:43.716 Malloc9 
00:21:43.716 Malloc10 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2058042 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2058042 /var/tmp/bdevperf.sock 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2058042 ']' 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:43.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.716 { 00:21:43.716 "params": { 00:21:43.716 "name": "Nvme$subsystem", 00:21:43.716 "trtype": "$TEST_TRANSPORT", 00:21:43.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.716 "adrfam": "ipv4", 00:21:43.716 "trsvcid": "$NVMF_PORT", 00:21:43.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.716 "hdgst": ${hdgst:-false}, 00:21:43.716 "ddgst": ${ddgst:-false} 00:21:43.716 }, 00:21:43.716 "method": "bdev_nvme_attach_controller" 00:21:43.716 } 00:21:43.716 EOF 00:21:43.716 )") 00:21:43.716 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.976 [2024-12-09 15:54:38.962827] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:21:43.976 [2024-12-09 15:54:38.962877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058042 ] 00:21:43.976 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:43.976 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:43.976 15:54:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme1", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme2", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme3", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme4", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 
00:21:43.976 "name": "Nvme5", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme6", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme7", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme8", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme9", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.976 "adrfam": "ipv4", 00:21:43.976 "trsvcid": "4420", 00:21:43.976 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.976 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:43.976 "hdgst": false, 00:21:43.976 "ddgst": false 00:21:43.976 }, 00:21:43.976 "method": "bdev_nvme_attach_controller" 00:21:43.976 },{ 00:21:43.976 "params": { 00:21:43.976 "name": "Nvme10", 00:21:43.976 "trtype": "tcp", 00:21:43.976 "traddr": "10.0.0.2", 00:21:43.977 "adrfam": "ipv4", 00:21:43.977 "trsvcid": "4420", 00:21:43.977 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.977 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.977 "hdgst": false, 00:21:43.977 "ddgst": false 00:21:43.977 }, 00:21:43.977 "method": "bdev_nvme_attach_controller" 00:21:43.977 }' 00:21:43.977 [2024-12-09 15:54:39.039865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.977 [2024-12-09 15:54:39.079757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.881 Running I/O for 10 seconds... 00:21:45.881 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.881 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:45.881 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:45.881 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.881 15:54:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:45.881 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:46.193 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2057762 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2057762 ']' 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2057762 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2057762 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2057762' 00:21:46.515 killing process with pid 2057762 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2057762 00:21:46.515 15:54:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2057762 00:21:46.515 [2024-12-09 15:54:41.702518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a2f50 is same with the state(6) to be set 
00:21:46.516 [2024-12-09 15:54:41.707962] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:21:46.516 [2024-12-09 15:54:41.712468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 
00:21:46.516 [2024-12-09 15:54:41.712695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 
15:54:41.712773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712850] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.712893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5b40 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714672] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714749] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714833] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.516 [2024-12-09 15:54:41.714877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714909] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714991] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.714997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.715055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3440 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716268] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716349] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716422] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716499] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716572] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set 00:21:46.517 [2024-12-09 15:54:41.716645] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set
00:21:46.517 [2024-12-09 15:54:41.716652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3910 is same with the state(6) to be set
00:21:46.517 [2024-12-09 15:54:41.717638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a3e00 is same with the state(6) to be set
00:21:46.517 (previous tcp.c:1790 message repeated for tqpair=0x24a3e00 with timestamps 15:54:41.717664 through 15:54:41.718053; identical lines elided)
00:21:46.517 [2024-12-09 15:54:41.718258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.517 [2024-12-09 15:54:41.718287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.517 [2024-12-09 15:54:41.718297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.517 [2024-12-09 15:54:41.718304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.517 [2024-12-09 15:54:41.718311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.517 [2024-12-09 15:54:41.718318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.517 [2024-12-09 15:54:41.718328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.517 [2024-12-09 15:54:41.718339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.517 [2024-12-09 15:54:41.718349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf16700 is same with the state(6) to be set
00:21:46.518 [2024-12-09 15:54:41.718397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48500 is same with the state(6) to be set
00:21:46.518 [2024-12-09 15:54:41.718522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaea750 is same with the state(6) to be set
00:21:46.518 [2024-12-09 15:54:41.718623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a42f0 is same with the state(6) to be set
00:21:46.518 [2024-12-09 15:54:41.718687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade790 is same with the state(6) to be set
00:21:46.518 (previous tcp.c:1790 message repeated for tqpair=0x24a42f0 with timestamps 15:54:41.718702 through 15:54:41.718840; identical lines elided)
00:21:46.518 [2024-12-09 15:54:41.718719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.518 [2024-12-09 15:54:41.718796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.718804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae98c0 is same with the state(6) to be set
00:21:46.518 [2024-12-09 15:54:41.719528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a47e0 is same with the state(6) to be set
00:21:46.518 (previous tcp.c:1790 message repeated for tqpair=0x24a47e0 with timestamps 15:54:41.719546 through 15:54:41.719993; identical lines elided)
00:21:46.518 [2024-12-09 15:54:41.719681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.518 [2024-12-09 15:54:41.719949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.518 [2024-12-09 15:54:41.719962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.719970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.719990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09 15:54:41.720280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.519 [2024-12-09 15:54:41.720289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.519 [2024-12-09
15:54:41.720296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.720940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.720947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.720958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.720966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.720978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.720980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.720984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.720991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.720992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519 [2024-12-09 15:54:41.720998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519 [2024-12-09 15:54:41.721003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519 [2024-12-09 15:54:41.721005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same 
with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.721011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.721013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.721021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.721022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.519
[2024-12-09 15:54:41.721029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.721035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.519
[2024-12-09 15:54:41.721036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.519
[2024-12-09 15:54:41.721045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520
[2024-12-09 15:54:41.721051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520
[2024-12-09 15:54:41.721057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520
[2024-12-09 15:54:41.721064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520
[2024-12-09 15:54:41.721071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is
same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4cb0 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be 
set 00:21:46.520 [2024-12-09 15:54:41.721942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.721999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.722004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.722010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [2024-12-09 
15:54:41.722016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5180 is same with the state(6) to be set 00:21:46.520 [message repeated 46 more times between 15:54:41.722022 and 15:54:41.722313] 00:21:46.520 [2024-12-09 15:54:41.724287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:46.520 [2024-12-09 15:54:41.724345] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7140 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.724823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.520 [2024-12-09 15:54:41.724930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.520 [2024-12-09 15:54:41.724939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2304b20 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.725065] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.725111] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.725153] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.726431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:46.520 [2024-12-09 15:54:41.726476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:46.520 [2024-12-09 15:54:41.726494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48500 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.726505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58210 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.726617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.520 [2024-12-09 15:54:41.726631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae7140 with addr=10.0.0.2, port=4420 00:21:46.520 [2024-12-09 15:54:41.726640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7140 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.726724] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.726884] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7140 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.726980] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.727375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.520 [2024-12-09 15:54:41.727393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58210 with addr=10.0.0.2, port=4420 00:21:46.520 [2024-12-09 15:54:41.727405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58210 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.727494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.520 [2024-12-09 15:54:41.727504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48500 with addr=10.0.0.2, port=4420 00:21:46.520 [2024-12-09 15:54:41.727512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48500 is same with the state(6) to be set 00:21:46.520 [2024-12-09 15:54:41.727520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:46.520 [2024-12-09 15:54:41.727526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:46.520 [2024-12-09 15:54:41.727534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:46.520 [2024-12-09 15:54:41.727543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:46.520 [2024-12-09 15:54:41.727624] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.520 [2024-12-09 15:54:41.727647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58210 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.727658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48500 (9): Bad file descriptor 00:21:46.520 [2024-12-09 15:54:41.727702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:46.520 [2024-12-09 15:54:41.727711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:46.520 [2024-12-09 15:54:41.727718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:46.520 [2024-12-09 15:54:41.727725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:46.520 [2024-12-09 15:54:41.727732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:46.520 [2024-12-09 15:54:41.727738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:46.520 [2024-12-09 15:54:41.727749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:46.520 [2024-12-09 15:54:41.727756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:46.803 [2024-12-09 15:54:41.728277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58030 is same with the state(6) to be set 00:21:46.803 [2024-12-09 15:54:41.728384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16700 (9): Bad file descriptor 00:21:46.803 [2024-12-09 15:54:41.728418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.803 [2024-12-09 15:54:41.728467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.803 [2024-12-09 15:54:41.728475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.804 [2024-12-09 15:54:41.728481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.728488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf16310 is same with the state(6) to be set 00:21:46.804 [2024-12-09 15:54:41.728522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.804 [2024-12-09 15:54:41.728534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.728541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.804 [2024-12-09 15:54:41.728551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 
15:54:41.728563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.804 [2024-12-09 15:54:41.728574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.728582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.804 [2024-12-09 15:54:41.728588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.728595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff610 is same with the state(6) to be set 00:21:46.804 [2024-12-09 15:54:41.728617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaea750 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.728632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xade790 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.728647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae98c0 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.735125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:46.804 [2024-12-09 15:54:41.735341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.804 [2024-12-09 15:54:41.735361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae7140 with addr=10.0.0.2, port=4420 00:21:46.804 [2024-12-09 15:54:41.735370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7140 is same with the state(6) to be set 00:21:46.804 [2024-12-09 15:54:41.735411] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7140 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.735455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:46.804 [2024-12-09 15:54:41.735466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:46.804 [2024-12-09 15:54:41.735474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:46.804 [2024-12-09 15:54:41.735482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:46.804 [2024-12-09 15:54:41.736966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:46.804 [2024-12-09 15:54:41.737020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:46.804 [2024-12-09 15:54:41.737259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.804 [2024-12-09 15:54:41.737275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48500 with addr=10.0.0.2, port=4420 00:21:46.804 [2024-12-09 15:54:41.737284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48500 is same with the state(6) to be set 00:21:46.804 [2024-12-09 15:54:41.737536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.804 [2024-12-09 15:54:41.737551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58210 with addr=10.0.0.2, port=4420 00:21:46.804 [2024-12-09 15:54:41.737560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58210 is same with the state(6) to be set 00:21:46.804 [2024-12-09 15:54:41.737574] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48500 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.737615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58210 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.737630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:46.804 [2024-12-09 15:54:41.737640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:46.804 [2024-12-09 15:54:41.737648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:46.804 [2024-12-09 15:54:41.737656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:46.804 [2024-12-09 15:54:41.737692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:46.804 [2024-12-09 15:54:41.737701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:46.804 [2024-12-09 15:54:41.737708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:46.804 [2024-12-09 15:54:41.737714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:46.804 [2024-12-09 15:54:41.738303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58030 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.738331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16310 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.738353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff610 (9): Bad file descriptor 00:21:46.804 [2024-12-09 15:54:41.738474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:46.804 [2024-12-09 15:54:41.738690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.804 [2024-12-09 15:54:41.738862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.804 [2024-12-09 15:54:41.738868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.738985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.738998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 
15:54:41.739125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739242] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 
[2024-12-09 15:54:41.739463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.805 [2024-12-09 15:54:41.739659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.805 [2024-12-09 15:54:41.739669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.739676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.739685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.739695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.739708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.739717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.739726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.739733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.739740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcee6c0 is same with the state(6) to be set 00:21:46.806 [2024-12-09 15:54:41.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.806 [2024-12-09 15:54:41.740882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.740984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.740990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.806 [2024-12-09 15:54:41.741232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.806 [2024-12-09 15:54:41.741551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.806 [2024-12-09 15:54:41.741563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 
15:54:41.741670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.741978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.741992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 
[2024-12-09 15:54:41.742001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.742018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.742038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.742057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.742075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.742096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.742106] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef6f0 is same with the state(6) to be set 00:21:46.807 [2024-12-09 15:54:41.743127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.807 [2024-12-09 15:54:41.743331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.807 [2024-12-09 15:54:41.743338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.807 [2024-12-09 15:54:41.743346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 
15:54:41.743683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.808 [2024-12-09 15:54:41.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.808 [2024-12-09 15:54:41.743887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 
[2024-12-09 15:54:41.743948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.743987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.743994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.744151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.744158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf0720 is same with the state(6) to be set 00:21:46.809 [2024-12-09 15:54:41.745150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.809 [2024-12-09 15:54:41.745210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.809 [2024-12-09 15:54:41.745495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.809 [2024-12-09 15:54:41.745536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.809 [2024-12-09 15:54:41.745543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 
15:54:41.745861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.745990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.745998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 
[2024-12-09 15:54:41.746130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.810 [2024-12-09 15:54:41.746177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.810 [2024-12-09 15:54:41.746186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.746193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.746201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeb60 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.747180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.747198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.747210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.747226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.747598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.747616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaea750 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.747624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaea750 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.747848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.747859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xade790 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.747867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade790 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.747952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.747962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae98c0 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.747969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae98c0 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.748186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.748197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf16700 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 
15:54:41.748204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf16700 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.749108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.749130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.749140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:46.811 [2024-12-09 15:54:41.749167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaea750 (9): Bad file descriptor 00:21:46.811 [2024-12-09 15:54:41.749178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xade790 (9): Bad file descriptor 00:21:46.811 [2024-12-09 15:54:41.749187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae98c0 (9): Bad file descriptor 00:21:46.811 [2024-12-09 15:54:41.749197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16700 (9): Bad file descriptor 00:21:46.811 [2024-12-09 15:54:41.749462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.749482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae7140 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.749492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7140 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.749622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.749633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48500 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.749640] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48500 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.749772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.811 [2024-12-09 15:54:41.749782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58210 with addr=10.0.0.2, port=4420 00:21:46.811 [2024-12-09 15:54:41.749790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58210 is same with the state(6) to be set 00:21:46.811 [2024-12-09 15:54:41.749798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:46.811 [2024-12-09 15:54:41.749804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:46.811 [2024-12-09 15:54:41.749812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:46.811 [2024-12-09 15:54:41.749820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:46.811 [2024-12-09 15:54:41.749829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:46.811 [2024-12-09 15:54:41.749835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:46.811 [2024-12-09 15:54:41.749841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:46.811 [2024-12-09 15:54:41.749848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:46.811 [2024-12-09 15:54:41.749855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:46.811 [2024-12-09 15:54:41.749862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:46.811 [2024-12-09 15:54:41.749869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:46.811 [2024-12-09 15:54:41.749875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:46.811 [2024-12-09 15:54:41.749882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:46.811 [2024-12-09 15:54:41.749889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:46.811 [2024-12-09 15:54:41.749899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:46.811 [2024-12-09 15:54:41.749905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:46.811 [2024-12-09 15:54:41.749977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.749988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.811 [2024-12-09 15:54:41.750271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.811 [2024-12-09 15:54:41.750278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.811 [2024-12-09 15:54:41.750287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 
15:54:41.750627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.812 [2024-12-09 15:54:41.750784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.812 [2024-12-09 15:54:41.750793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 
[2024-12-09 15:54:41.750895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.750986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.750995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.751002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.751010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.751017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.751025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef10c0 is same with the state(6) to be set 00:21:46.813 [2024-12-09 15:54:41.752008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.813 [2024-12-09 15:54:41.752157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.813 [2024-12-09 15:54:41.752434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.813 [2024-12-09 15:54:41.752443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 
15:54:41.752527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 
[2024-12-09 15:54:41.752797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.752989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.752998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.753005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.753013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.753020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.753028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.753036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.753043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bebb90 is same with the state(6) to be set 00:21:46.814 [2024-12-09 15:54:41.754027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.754042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.814 [2024-12-09 15:54:41.754053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.814 [2024-12-09 15:54:41.754061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.814 [2024-12-09 15:54:41.754070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.815 [2024-12-09 15:54:41.754333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 
15:54:41.754696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.815 [2024-12-09 15:54:41.754721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.815 [2024-12-09 15:54:41.754728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754784] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 
[2024-12-09 15:54:41.754967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.754985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.754994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.755001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.755009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.755016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.755025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.755032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.816 [2024-12-09 15:54:41.755047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.816 [2024-12-09 15:54:41.755056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.816 [2024-12-09 15:54:41.755064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.816 [2024-12-09 15:54:41.755072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e391c0 is same with the state(6) to be set
00:21:46.816 [2024-12-09 15:54:41.756019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:46.816 [2024-12-09 15:54:41.756036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:46.816 task offset: 33408 on job bdev=Nvme5n1 fails
00:21:46.816
00:21:46.816 Latency(us)
00:21:46.816 [2024-12-09T14:54:42.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.816 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme1n1 ended in about 0.90 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme1n1 : 0.90 213.90 13.37 71.30 0.00 222169.72 16352.79 215707.06
00:21:46.816 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme2n1 : 0.90 213.34 13.33 71.11 0.00 218869.76 17351.44 219701.64
00:21:46.816 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme3n1 : 0.90 212.86 13.30 70.95 0.00 215465.45 15042.07 217704.35
00:21:46.816 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme4n1 : 0.90 212.38 13.27 70.79 0.00 212112.94 15603.81 210713.84
00:21:46.816 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme5n1 ended in about 0.88 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme5n1 : 0.88 287.69 17.98 72.77 0.00 163107.93 3869.74 211712.49
00:21:46.816 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme6n1 ended in about 0.91 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme6n1 : 0.91 211.26 13.20 70.42 0.00 205524.60 21346.01 215707.06
00:21:46.816 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme7n1 : 0.91 215.18 13.45 70.26 0.00 199002.82 15728.64 211712.49
00:21:46.816 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme8n1 ended in about 0.91 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme8n1 : 0.91 210.32 13.15 70.11 0.00 198808.50 14730.00 214708.42
00:21:46.816 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme9n1 : 0.88 217.99 13.62 0.00 0.00 249336.85 17101.78 225693.50
00:21:46.816 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.816 Job: Nvme10n1 ended in about 0.88 seconds with error
00:21:46.816 Verification LBA range: start 0x0 length 0x400
00:21:46.816 Nvme10n1 : 0.88 215.08 13.44 6.79 0.00 239675.79 20721.86 241671.80
00:21:46.816 [2024-12-09T14:54:42.044Z] ===================================================================================================================
00:21:46.816 [2024-12-09T14:54:42.044Z] Total : 2209.99 138.12 574.51 0.00 209584.38 3869.74 241671.80 00:21:46.816 [2024-12-09 15:54:41.786410] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:46.816 [2024-12-09 15:54:41.786459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:46.816 [2024-12-09 15:54:41.786517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7140 (9): Bad file descriptor 00:21:46.816 [2024-12-09 15:54:41.786532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48500 (9): Bad file descriptor 00:21:46.816 [2024-12-09 15:54:41.786541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58210 (9): Bad file descriptor 00:21:46.816 [2024-12-09 15:54:41.786912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.817 [2024-12-09 15:54:41.786934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf16310 with addr=10.0.0.2, port=4420 00:21:46.817 [2024-12-09 15:54:41.786945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf16310 is same with the state(6) to be set 00:21:46.817 [2024-12-09 15:54:41.787098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.817 [2024-12-09 15:54:41.787111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ff610 with addr=10.0.0.2, port=4420 00:21:46.817 [2024-12-09 15:54:41.787120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff610 is same with the state(6) to be set 00:21:46.817 [2024-12-09 15:54:41.787244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.817 [2024-12-09 15:54:41.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58030 with 
addr=10.0.0.2, port=4420 00:21:46.817 [2024-12-09 15:54:41.787265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58030 is same with the state(6) to be set 00:21:46.817 [2024-12-09 15:54:41.787274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.787281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:46.817 [2024-12-09 15:54:41.787289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:46.817 [2024-12-09 15:54:41.787298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:46.817 [2024-12-09 15:54:41.787312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.787318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:46.817 [2024-12-09 15:54:41.787326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:46.817 [2024-12-09 15:54:41.787332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:46.817 [2024-12-09 15:54:41.787338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.787344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:46.817 [2024-12-09 15:54:41.787351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:21:46.817 [2024-12-09 15:54:41.787358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:46.817 [2024-12-09 15:54:41.788120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16310 (9): Bad file descriptor 00:21:46.817 [2024-12-09 15:54:41.788137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff610 (9): Bad file descriptor 00:21:46.817 [2024-12-09 15:54:41.788148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58030 (9): Bad file descriptor 00:21:46.817 [2024-12-09 15:54:41.788194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:46.817 [2024-12-09 15:54:41.788295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.788303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 
00:21:46.817 [2024-12-09 15:54:41.788309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:46.817 [2024-12-09 15:54:41.788316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:46.817 [2024-12-09 15:54:41.788325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.788331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:46.817 [2024-12-09 15:54:41.788338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:46.817 [2024-12-09 15:54:41.788343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:46.817 [2024-12-09 15:54:41.788351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:46.817 [2024-12-09 15:54:41.788357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:46.817 [2024-12-09 15:54:41.788364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:46.817 [2024-12-09 15:54:41.788369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:46.817 [2024-12-09 15:54:41.788666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.788682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf16700 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.788690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf16700 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.788814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.788825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae98c0 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.788833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae98c0 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.788961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.788974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xade790 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.788981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xade790 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.789135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.789149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaea750 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.789158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaea750 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.789234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.789248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58210 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.789258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58210 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.789475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.789487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf48500 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.789494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf48500 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.789569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.817 [2024-12-09 15:54:41.789581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae7140 with addr=10.0.0.2, port=4420
00:21:46.817 [2024-12-09 15:54:41.789588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae7140 is same with the state(6) to be set
00:21:46.817 [2024-12-09 15:54:41.789615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16700 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae98c0 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xade790 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaea750 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58210 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf48500 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7140 (9): Bad file descriptor
00:21:46.817 [2024-12-09 15:54:41.789698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:46.817 [2024-12-09 15:54:41.789709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:46.817 [2024-12-09 15:54:41.789717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:46.817 [2024-12-09 15:54:41.789724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:46.817 [2024-12-09 15:54:41.789733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:46.817 [2024-12-09 15:54:41.789740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:46.817 [2024-12-09 15:54:41.789748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:46.817 [2024-12-09 15:54:41.789756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:46.817 [2024-12-09 15:54:41.789764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:46.817 [2024-12-09 15:54:41.789770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:46.817 [2024-12-09 15:54:41.789777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:46.817 [2024-12-09 15:54:41.789784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:46.817 [2024-12-09 15:54:41.789791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:46.817 [2024-12-09 15:54:41.789797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:46.817 [2024-12-09 15:54:41.789803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:46.817 [2024-12-09 15:54:41.789810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:46.817 [2024-12-09 15:54:41.789817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:46.817 [2024-12-09 15:54:41.789824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:46.818 [2024-12-09 15:54:41.789831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:46.818 [2024-12-09 15:54:41.789839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:46.818 [2024-12-09 15:54:41.789847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:46.818 [2024-12-09 15:54:41.789854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:46.818 [2024-12-09 15:54:41.789860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:46.818 [2024-12-09 15:54:41.789866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:46.818 [2024-12-09 15:54:41.789874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:46.818 [2024-12-09 15:54:41.789880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:46.818 [2024-12-09 15:54:41.789886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:46.818 [2024-12-09 15:54:41.789892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:47.076 15:54:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:21:48.013 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2058042
00:21:48.013 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2058042
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2058042
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:48.014 rmmod nvme_tcp
00:21:48.014 rmmod nvme_fabrics
00:21:48.014 rmmod nvme_keyring
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2057762 ']'
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2057762
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2057762 ']'
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2057762
00:21:48.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2057762) - No such process
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2057762 is not found'
00:21:48.014 Process with pid 2057762 is not found
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:48.014 15:54:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:50.550
00:21:50.550 real 0m8.159s
00:21:50.550 user 0m20.868s
00:21:50.550 sys 0m1.426s
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:50.550 ************************************
00:21:50.550 END TEST nvmf_shutdown_tc3
00:21:50.550 ************************************
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:50.550 ************************************
00:21:50.550 START TEST nvmf_shutdown_tc4
00:21:50.550 ************************************
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:21:50.550 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:21:50.551 Found 0000:af:00.0 (0x8086 - 0x159b)
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:21:50.551 Found 0000:af:00.1 (0x8086 - 0x159b)
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:21:50.551 Found net devices under 0000:af:00.0: cvl_0_0
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:21:50.551 Found net devices under 0000:af:00.1: cvl_0_1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:50.551 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:50.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:50.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:21:50.551
00:21:50.552 --- 10.0.0.2 ping statistics ---
00:21:50.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:50.552 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:50.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:50.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms
00:21:50.552
00:21:50.552 --- 10.0.0.1 ping statistics ---
00:21:50.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:50.552 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2059300
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2059300
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2059300 ']'
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:50.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:50.552 15:54:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:50.552 [2024-12-09 15:54:45.709126] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
00:21:50.552 [2024-12-09 15:54:45.709166] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:50.811 [2024-12-09 15:54:45.786672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:50.811 [2024-12-09 15:54:45.827819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:50.811 [2024-12-09 15:54:45.827856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:50.811 [2024-12-09 15:54:45.827864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:50.811 [2024-12-09 15:54:45.827870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:50.811 [2024-12-09 15:54:45.827875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:50.811 [2024-12-09 15:54:45.829466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:50.811 [2024-12-09 15:54:45.829577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:50.811 [2024-12-09 15:54:45.829684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:50.811 [2024-12-09 15:54:45.829686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:51.379 [2024-12-09 15:54:46.596642] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.379 15:54:46
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.379 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.638 15:54:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:51.638 Malloc1 00:21:51.638 [2024-12-09 15:54:46.713112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.638 Malloc2 00:21:51.638 Malloc3 00:21:51.638 Malloc4 00:21:51.638 Malloc5 00:21:51.897 Malloc6 00:21:51.897 Malloc7 00:21:51.897 Malloc8 00:21:51.897 Malloc9 
00:21:51.897 Malloc10 00:21:51.897 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.897 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:51.897 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.897 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:52.156 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2059579 00:21:52.156 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:52.156 15:54:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:52.156 [2024-12-09 15:54:47.221252] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2059300 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2059300 ']' 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2059300 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059300 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059300' 00:21:57.434 killing process with pid 2059300 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2059300 00:21:57.434 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2059300 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 starting I/O failed: -6 00:21:57.434 Write completed with error (sct=0, sc=8) 
00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 starting I/O failed: -6 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 starting I/O failed: -6 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.434 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed 
with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 [2024-12-09 15:54:52.222070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 
starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 [2024-12-09 15:54:52.222937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, 
sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O 
failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 [2024-12-09 15:54:52.223947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 
00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.435 Write completed with error (sct=0, sc=8) 00:21:57.435 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 [2024-12-09 15:54:52.224427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with tstarting I/O failed: -6 00:21:57.436 he state(6) to be set 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 [2024-12-09 15:54:52.224468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 starting I/O failed: -6 00:21:57.436 [2024-12-09 15:54:52.224476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.224483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 [2024-12-09 15:54:52.224490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 starting I/O failed: -6 00:21:57.436 [2024-12-09 15:54:52.224497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.224505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe4f6e0 is same with the state(6) to be set 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 
00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: 
-6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 [2024-12-09 15:54:52.225602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda8e0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.436 NVMe io qpair process completion error 00:21:57.436 [2024-12-09 15:54:52.225962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the 
state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.225995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.226002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.226009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with tWrite completed with error (sct=0, sc=8) 00:21:57.436 he state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.226018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with tstarting I/O failed: -6 00:21:57.436 he state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.226025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the state(6) to be set 00:21:57.436 [2024-12-09 15:54:52.226031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdadd0 is same with the state(6) to be set 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 starting I/O failed: -6 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error (sct=0, sc=8) 00:21:57.436 Write completed with error 
(sct=0, sc=8)
00:21:57.436 Write completed with error (sct=0, sc=8)
00:21:57.436 starting I/O failed: -6
[the previous two log lines repeat many times; repeats omitted]
00:21:57.436 [2024-12-09 15:54:52.226617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:57.436 [2024-12-09 15:54:52.226652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda410 is same with the state(6) to be set
[the previous tcp.c:1790 log line repeats with timestamps through 15:54:52.226739; repeats omitted]
00:21:57.437 Write completed with error (sct=0, sc=8)
00:21:57.437 starting I/O failed: -6
[the previous two log lines repeat many times; repeats omitted]
00:21:57.437 [2024-12-09 15:54:52.227512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:57.437 Write completed with error (sct=0, sc=8)
00:21:57.437 starting I/O failed: -6
[the previous two log lines repeat many times; repeats omitted]
00:21:57.437 starting I/O failed: -6
00:21:57.437 Write completed with error (sct=0, sc=8)
[the previous two log lines repeat many times; repeats omitted]
00:21:57.437 [2024-12-09 15:54:52.228495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:57.437 Write completed with error (sct=0, sc=8)
00:21:57.437 starting I/O failed: -6
[the previous two log lines repeat many times, timestamps advancing to 00:21:57.438; repeats omitted]
00:21:57.438 [2024-12-09 15:54:52.230031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.438 NVMe io qpair process completion error
00:21:57.438 Write completed with error (sct=0, sc=8)
00:21:57.438 starting I/O failed: -6
[the previous two log lines repeat many times; repeats omitted]
00:21:57.438 [2024-12-09 15:54:52.230961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:57.438 starting I/O failed: -6
00:21:57.438 Write completed with error (sct=0, sc=8)
[the previous two log lines repeat many times; repeats omitted]
00:21:57.438 starting I/O failed: -6
00:21:57.438 Write completed with error (sct=0, sc=8)
[the previous two log lines repeat many times; repeats omitted]
00:21:57.438 [2024-12-09 15:54:52.231846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:57.438 Write completed with error (sct=0, sc=8)
00:21:57.438 starting I/O failed: -6
[the previous two log lines repeat many times, timestamps advancing to 00:21:57.439; repeats omitted]
00:21:57.439 [2024-12-09 15:54:52.232884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:57.439 Write completed with error (sct=0, sc=8)
00:21:57.439 starting I/O failed: -6
[the previous two log lines repeat many times; repeats omitted]
00:21:57.439 starting I/O failed: -6
00:21:57.439 Write completed with error (sct=0, sc=8)
[the previous two log lines repeat many times; repeats omitted]
00:21:57.439 [2024-12-09 15:54:52.234593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:57.439 NVMe io qpair process completion error
00:21:57.439 Write completed with error (sct=0, sc=8)
00:21:57.439 starting I/O failed: -6
[the previous two log lines repeat many times, timestamps advancing to 00:21:57.440; repeats omitted]
00:21:57.440 [2024-12-09 15:54:52.235436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:57.440 starting I/O failed: -6
00:21:57.440 Write completed with error (sct=0, sc=8)
[the previous two log lines repeat many times; repeats omitted]
starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with 
error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 [2024-12-09 15:54:52.236257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, 
sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O 
failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 [2024-12-09 15:54:52.237284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 
00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.440 Write completed with error (sct=0, sc=8) 00:21:57.440 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: 
-6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O 
failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 [2024-12-09 15:54:52.239359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.441 NVMe io qpair process completion error 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 
starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 [2024-12-09 15:54:52.240403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with 
error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 
00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 [2024-12-09 15:54:52.241275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 
00:21:57.441 starting I/O failed: -6 00:21:57.441 Write completed with error (sct=0, sc=8) 00:21:57.441 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 
00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 [2024-12-09 15:54:52.242271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, 
sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error 
(sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with 
error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 [2024-12-09 15:54:52.244158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.442 NVMe io qpair process completion error 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 starting I/O failed: -6 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write completed with error (sct=0, sc=8) 00:21:57.442 Write 
completed with error (sct=0, sc=8)
00:21:57.442 starting I/O failed: -6
00:21:57.442 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:21:57.442 [2024-12-09 15:54:52.245121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines omitted ...]
00:21:57.443 [2024-12-09 15:54:52.245896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines omitted ...]
00:21:57.443 [2024-12-09 15:54:52.246941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure lines omitted ...]
00:21:57.444 [2024-12-09 15:54:52.252173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:57.444 NVMe io qpair process completion error
[... repeated write-failure lines omitted ...]
00:21:57.444 [2024-12-09 15:54:52.253089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines omitted ...]
00:21:57.444 [2024-12-09 15:54:52.254002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure lines omitted ...]
00:21:57.445 [2024-12-09 15:54:52.255073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure lines omitted ...]
00:21:57.445 [2024-12-09 15:54:52.256849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:57.445 NVMe io qpair process completion error
[... repeated write-failure lines omitted ...]
00:21:57.445 [2024-12-09 15:54:52.257874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure lines omitted ...]
00:21:57.446 [2024-12-09 15:54:52.258800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure lines omitted ...]
00:21:57.446 [2024-12-09 15:54:52.259797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:57.446 Write completed with error (sct=0, sc=8)
00:21:57.446 starting I/O
failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.446 Write completed with error (sct=0, sc=8) 00:21:57.446 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting 
I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 [2024-12-09 15:54:52.261363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:57.447 NVMe io qpair process completion error 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 
00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 
starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 [2024-12-09 15:54:52.262335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 
Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 [2024-12-09 15:54:52.263185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:57.447 
Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, 
sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.447 Write completed with error (sct=0, sc=8) 00:21:57.447 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O 
failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 [2024-12-09 15:54:52.264213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 
00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: 
-6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 [2024-12-09 15:54:52.270062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.448 NVMe io qpair process completion error 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, 
sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write 
completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 [2024-12-09 15:54:52.271029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.448 Write completed with error (sct=0, sc=8) 
00:21:57.448 starting I/O failed: -6 00:21:57.448 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 [2024-12-09 15:54:52.271930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such 
device or address) on qpair id 4 00:21:57.449 starting I/O failed: -6 00:21:57.449 starting I/O failed: -6 00:21:57.449 starting I/O failed: -6 00:21:57.449 starting I/O failed: -6 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 
00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 [2024-12-09 15:54:52.273129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 
starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 
00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.449 starting I/O failed: -6 00:21:57.449 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, 
sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 Write completed with error (sct=0, sc=8) 00:21:57.450 starting I/O failed: -6 00:21:57.450 [2024-12-09 15:54:52.276753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:57.450 NVMe io qpair process completion error 00:21:57.450 Initializing NVMe Controllers 00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:57.450 Controller IO queue size 128, less than required. 00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:57.450 Controller IO queue size 128, less than required.
00:21:57.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:57.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:57.450 Initialization complete. Launching workers.
00:21:57.450 ========================================================
00:21:57.450 Latency(us)
00:21:57.450 Device Information : IOPS MiB/s Average min max
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2150.92 92.42 59515.30 692.69 112142.50
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2150.06 92.39 59556.64 865.64 114079.95
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2153.73 92.54 59516.56 683.35 120568.72
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2182.94 93.80 58052.73 901.79 103681.32
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2197.44 94.42 57678.27 888.01 103016.75
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2177.31 93.56 58220.58 652.53 101561.76
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2186.19 93.94 57993.66 813.77 100540.10
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2257.58 97.01 56176.17 873.68 101649.24
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2237.46 96.14 56696.00 801.80 103712.27
00:21:57.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2228.59 95.76 56976.13 628.62 109612.76
00:21:57.450 ========================================================
00:21:57.450 Total : 21922.21 941.97 58019.62 628.62 120568.72
00:21:57.450
00:21:57.450 [2024-12-09 15:54:52.279755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9560 is same with the state(6) to be set
00:21:57.450 [2024-12-09 15:54:52.279801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6baa70 is same with the state(6) to be set
00:21:57.450 [2024-12-09 15:54:52.279831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x6b9890 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb720 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bbae0 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bb900 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9bc0 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ba740 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.279999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b9ef0 is same with the state(6) to be set 00:21:57.450 [2024-12-09 15:54:52.280027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ba410 is same with the state(6) to be set 00:21:57.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:57.450 15:54:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2059579 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2059579 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 
-- # local arg=wait 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2059579 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.388 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.648 rmmod nvme_tcp 00:21:58.648 rmmod nvme_fabrics 00:21:58.648 rmmod nvme_keyring 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2059300 ']' 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2059300 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2059300 ']' 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2059300 00:21:58.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2059300) - No such process 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2059300 is not found' 00:21:58.648 Process with pid 2059300 is not found 00:21:58.648 15:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.648 15:54:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.553 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.553 00:22:00.553 real 0m10.412s 00:22:00.553 user 0m27.695s 00:22:00.553 sys 0m5.111s 00:22:00.553 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.553 15:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:00.553 ************************************ 00:22:00.553 END TEST nvmf_shutdown_tc4 00:22:00.553 ************************************ 00:22:00.811 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:00.812 00:22:00.812 real 0m41.321s 00:22:00.812 user 1m41.724s 00:22:00.812 sys 0m13.867s 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.812 ************************************ 00:22:00.812 END TEST nvmf_shutdown 00:22:00.812 ************************************ 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.812 ************************************ 00:22:00.812 START TEST nvmf_nsid 00:22:00.812 ************************************ 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:00.812 * Looking for test storage... 
00:22:00.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:00.812 15:54:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.812 
15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:00.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.812 --rc genhtml_branch_coverage=1 00:22:00.812 --rc genhtml_function_coverage=1 00:22:00.812 --rc genhtml_legend=1 00:22:00.812 --rc geninfo_all_blocks=1 00:22:00.812 --rc 
geninfo_unexecuted_blocks=1 00:22:00.812 00:22:00.812 ' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:00.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.812 --rc genhtml_branch_coverage=1 00:22:00.812 --rc genhtml_function_coverage=1 00:22:00.812 --rc genhtml_legend=1 00:22:00.812 --rc geninfo_all_blocks=1 00:22:00.812 --rc geninfo_unexecuted_blocks=1 00:22:00.812 00:22:00.812 ' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:00.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.812 --rc genhtml_branch_coverage=1 00:22:00.812 --rc genhtml_function_coverage=1 00:22:00.812 --rc genhtml_legend=1 00:22:00.812 --rc geninfo_all_blocks=1 00:22:00.812 --rc geninfo_unexecuted_blocks=1 00:22:00.812 00:22:00.812 ' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:00.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.812 --rc genhtml_branch_coverage=1 00:22:00.812 --rc genhtml_function_coverage=1 00:22:00.812 --rc genhtml_legend=1 00:22:00.812 --rc geninfo_all_blocks=1 00:22:00.812 --rc geninfo_unexecuted_blocks=1 00:22:00.812 00:22:00.812 ' 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.812 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.071 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.072 15:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.072 15:54:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.640 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:07.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:07.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:07.641 Found net devices under 0000:af:00.0: cvl_0_0 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:07.641 Found net devices under 0000:af:00.1: cvl_0_1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.641 15:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.641 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:07.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:22:07.641 00:22:07.641 --- 10.0.0.2 ping statistics --- 00:22:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.641 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:22:07.641 00:22:07.641 --- 10.0.0.1 ping statistics --- 00:22:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.641 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.641 15:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2063992 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2063992 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2063992 ']' 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.641 15:55:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.641 [2024-12-09 15:55:01.965124] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:22:07.641 [2024-12-09 15:55:01.965167] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.641 [2024-12-09 15:55:02.040640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.641 [2024-12-09 15:55:02.079376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.641 [2024-12-09 15:55:02.079412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.641 [2024-12-09 15:55:02.079419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.641 [2024-12-09 15:55:02.079425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.641 [2024-12-09 15:55:02.079430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.641 [2024-12-09 15:55:02.079955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.641 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.641 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:07.641 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:07.641 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.641 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2064066 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.642 
15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=abe0688e-4206-4455-ba36-f8ad69ce3e74 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=43e8273a-389a-4649-9568-12ba7c92ef26 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=3bee3cbb-c942-4445-b717-187f140fe7d8 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.642 null0 00:22:07.642 null1 00:22:07.642 [2024-12-09 15:55:02.265093] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:22:07.642 [2024-12-09 15:55:02.265137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064066 ] 00:22:07.642 null2 00:22:07.642 [2024-12-09 15:55:02.271695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.642 [2024-12-09 15:55:02.295884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2064066 /var/tmp/tgt2.sock 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2064066 ']' 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:07.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:07.642 [2024-12-09 15:55:02.340636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.642 [2024-12-09 15:55:02.385056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:07.642 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:07.901 [2024-12-09 15:55:02.899038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.901 [2024-12-09 15:55:02.915128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:07.901 nvme0n1 nvme0n2 00:22:07.901 nvme1n1 00:22:07.901 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:07.901 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:07.901 15:55:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:08.837 15:55:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid abe0688e-4206-4455-ba36-f8ad69ce3e74 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:10.213 15:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:10.213 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=abe0688e42064455ba36f8ad69ce3e74 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ABE0688E42064455BA36F8AD69CE3E74 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ABE0688E42064455BA36F8AD69CE3E74 == \A\B\E\0\6\8\8\E\4\2\0\6\4\4\5\5\B\A\3\6\F\8\A\D\6\9\C\E\3\E\7\4 ]] 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 43e8273a-389a-4649-9568-12ba7c92ef26 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:10.214 
15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=43e8273a389a4649956812ba7c92ef26 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 43E8273A389A4649956812BA7C92EF26 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 43E8273A389A4649956812BA7C92EF26 == \4\3\E\8\2\7\3\A\3\8\9\A\4\6\4\9\9\5\6\8\1\2\B\A\7\C\9\2\E\F\2\6 ]] 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 3bee3cbb-c942-4445-b717-187f140fe7d8 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3bee3cbbc9424445b717187f140fe7d8 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3BEE3CBBC9424445B717187F140FE7D8 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 3BEE3CBBC9424445B717187F140FE7D8 == \3\B\E\E\3\C\B\B\C\9\4\2\4\4\4\5\B\7\1\7\1\8\7\F\1\4\0\F\E\7\D\8 ]] 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:10.214 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2064066 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2064066 ']' 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2064066 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2064066 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2064066' 00:22:10.473 killing process with pid 2064066 00:22:10.473 15:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2064066 00:22:10.473 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2064066 00:22:10.732 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:10.732 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.733 rmmod nvme_tcp 00:22:10.733 rmmod nvme_fabrics 00:22:10.733 rmmod nvme_keyring 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2063992 ']' 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2063992 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2063992 ']' 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2063992 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.733 15:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063992 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063992' 00:22:10.733 killing process with pid 2063992 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2063992 00:22:10.733 15:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2063992 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.992 15:55:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.992 15:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.898 15:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.157 00:22:13.157 real 0m12.264s 00:22:13.157 user 0m9.615s 00:22:13.157 sys 0m5.408s 00:22:13.157 15:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.157 15:55:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:13.157 ************************************ 00:22:13.157 END TEST nvmf_nsid 00:22:13.157 ************************************ 00:22:13.157 15:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:13.157 00:22:13.157 real 12m1.126s 00:22:13.157 user 25m49.587s 00:22:13.157 sys 3m42.391s 00:22:13.157 15:55:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.157 15:55:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:13.157 ************************************ 00:22:13.157 END TEST nvmf_target_extra 00:22:13.157 ************************************ 00:22:13.157 15:55:08 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:13.157 15:55:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.157 15:55:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.157 15:55:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.157 ************************************ 00:22:13.157 START TEST nvmf_host 00:22:13.157 ************************************ 00:22:13.157 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:13.157 * Looking for test storage... 
00:22:13.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:13.157 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:13.157 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:13.157 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:13.419 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.420 --rc genhtml_branch_coverage=1 00:22:13.420 --rc genhtml_function_coverage=1 00:22:13.420 --rc genhtml_legend=1 00:22:13.420 --rc geninfo_all_blocks=1 00:22:13.420 --rc geninfo_unexecuted_blocks=1 00:22:13.420 00:22:13.420 ' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.420 --rc genhtml_branch_coverage=1 00:22:13.420 --rc genhtml_function_coverage=1 00:22:13.420 --rc genhtml_legend=1 00:22:13.420 --rc 
geninfo_all_blocks=1 00:22:13.420 --rc geninfo_unexecuted_blocks=1 00:22:13.420 00:22:13.420 ' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.420 --rc genhtml_branch_coverage=1 00:22:13.420 --rc genhtml_function_coverage=1 00:22:13.420 --rc genhtml_legend=1 00:22:13.420 --rc geninfo_all_blocks=1 00:22:13.420 --rc geninfo_unexecuted_blocks=1 00:22:13.420 00:22:13.420 ' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:13.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.420 --rc genhtml_branch_coverage=1 00:22:13.420 --rc genhtml_function_coverage=1 00:22:13.420 --rc genhtml_legend=1 00:22:13.420 --rc geninfo_all_blocks=1 00:22:13.420 --rc geninfo_unexecuted_blocks=1 00:22:13.420 00:22:13.420 ' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 ************************************ 00:22:13.420 START TEST nvmf_multicontroller 00:22:13.420 ************************************ 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:13.420 * Looking for test storage... 
00:22:13.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:13.420 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.682 --rc genhtml_branch_coverage=1 00:22:13.682 --rc genhtml_function_coverage=1 
00:22:13.682 --rc genhtml_legend=1 00:22:13.682 --rc geninfo_all_blocks=1 00:22:13.682 --rc geninfo_unexecuted_blocks=1 00:22:13.682 00:22:13.682 ' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.682 --rc genhtml_branch_coverage=1 00:22:13.682 --rc genhtml_function_coverage=1 00:22:13.682 --rc genhtml_legend=1 00:22:13.682 --rc geninfo_all_blocks=1 00:22:13.682 --rc geninfo_unexecuted_blocks=1 00:22:13.682 00:22:13.682 ' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.682 --rc genhtml_branch_coverage=1 00:22:13.682 --rc genhtml_function_coverage=1 00:22:13.682 --rc genhtml_legend=1 00:22:13.682 --rc geninfo_all_blocks=1 00:22:13.682 --rc geninfo_unexecuted_blocks=1 00:22:13.682 00:22:13.682 ' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.682 --rc genhtml_branch_coverage=1 00:22:13.682 --rc genhtml_function_coverage=1 00:22:13.682 --rc genhtml_legend=1 00:22:13.682 --rc geninfo_all_blocks=1 00:22:13.682 --rc geninfo_unexecuted_blocks=1 00:22:13.682 00:22:13.682 ' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.682 15:55:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.682 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.683 15:55:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:20.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:20.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.254 15:55:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.254 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:20.255 Found net devices under 0000:af:00.0: cvl_0_0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:20.255 Found net devices under 0000:af:00.1: cvl_0_1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:20.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:22:20.255 00:22:20.255 --- 10.0.0.2 ping statistics --- 00:22:20.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.255 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:20.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:20.255 00:22:20.255 --- 10.0.0.1 ping statistics --- 00:22:20.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.255 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2068278 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2068278 00:22:20.255 15:55:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2068278 ']' 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 [2024-12-09 15:55:14.692381] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:22:20.255 [2024-12-09 15:55:14.692429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.255 [2024-12-09 15:55:14.771323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:20.255 [2024-12-09 15:55:14.810327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.255 [2024-12-09 15:55:14.810362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:20.255 [2024-12-09 15:55:14.810370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.255 [2024-12-09 15:55:14.810376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.255 [2024-12-09 15:55:14.810381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.255 [2024-12-09 15:55:14.811778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.255 [2024-12-09 15:55:14.811887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.255 [2024-12-09 15:55:14.811888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 [2024-12-09 15:55:14.960565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.255 15:55:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 Malloc0 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.255 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 [2024-12-09 
15:55:15.025945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 [2024-12-09 15:55:15.033874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 Malloc1 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2068397 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2068397 /var/tmp/bdevperf.sock 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2068397 ']' 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.256 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.515 NVMe0n1 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.515 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.515 1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.516 15:55:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.516 request: 00:22:20.516 { 00:22:20.516 "name": "NVMe0", 00:22:20.516 "trtype": "tcp", 00:22:20.516 "traddr": "10.0.0.2", 00:22:20.516 "adrfam": "ipv4", 00:22:20.516 "trsvcid": "4420", 00:22:20.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.516 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:20.516 "hostaddr": "10.0.0.1", 00:22:20.516 "prchk_reftag": false, 00:22:20.516 "prchk_guard": false, 00:22:20.516 "hdgst": false, 00:22:20.516 "ddgst": false, 00:22:20.516 "allow_unrecognized_csi": false, 00:22:20.516 "method": "bdev_nvme_attach_controller", 00:22:20.516 "req_id": 1 00:22:20.516 } 00:22:20.516 Got JSON-RPC error response 00:22:20.516 response: 00:22:20.516 { 00:22:20.516 "code": -114, 00:22:20.516 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.516 } 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.516 15:55:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.516 request: 00:22:20.516 { 00:22:20.516 "name": "NVMe0", 00:22:20.516 "trtype": "tcp", 00:22:20.516 "traddr": "10.0.0.2", 00:22:20.516 "adrfam": "ipv4", 00:22:20.516 "trsvcid": "4420", 00:22:20.516 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.516 "hostaddr": "10.0.0.1", 00:22:20.516 "prchk_reftag": false, 00:22:20.516 "prchk_guard": false, 00:22:20.516 "hdgst": false, 00:22:20.516 "ddgst": false, 00:22:20.516 "allow_unrecognized_csi": false, 00:22:20.516 "method": "bdev_nvme_attach_controller", 00:22:20.516 "req_id": 1 00:22:20.516 } 00:22:20.516 Got JSON-RPC error response 00:22:20.516 response: 00:22:20.516 { 00:22:20.516 "code": -114, 00:22:20.516 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.516 } 00:22:20.516 15:55:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.516 request: 00:22:20.516 { 00:22:20.516 "name": "NVMe0", 00:22:20.516 "trtype": "tcp", 00:22:20.516 "traddr": "10.0.0.2", 00:22:20.516 "adrfam": "ipv4", 00:22:20.516 "trsvcid": "4420", 00:22:20.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.516 "hostaddr": "10.0.0.1", 00:22:20.516 "prchk_reftag": false, 00:22:20.516 "prchk_guard": false, 00:22:20.516 "hdgst": false, 00:22:20.516 "ddgst": false, 00:22:20.516 "multipath": "disable", 00:22:20.516 "allow_unrecognized_csi": false, 00:22:20.516 "method": "bdev_nvme_attach_controller", 00:22:20.516 "req_id": 1 00:22:20.516 } 00:22:20.516 Got JSON-RPC error response 00:22:20.516 response: 00:22:20.516 { 00:22:20.516 "code": -114, 00:22:20.516 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:20.516 } 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.516 request: 00:22:20.516 { 00:22:20.516 "name": "NVMe0", 00:22:20.516 "trtype": "tcp", 00:22:20.516 "traddr": "10.0.0.2", 00:22:20.516 "adrfam": "ipv4", 00:22:20.516 "trsvcid": "4420", 00:22:20.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.516 "hostaddr": "10.0.0.1", 00:22:20.516 "prchk_reftag": false, 00:22:20.516 "prchk_guard": false, 00:22:20.516 "hdgst": false, 00:22:20.516 "ddgst": false, 00:22:20.516 "multipath": "failover", 00:22:20.516 "allow_unrecognized_csi": false, 00:22:20.516 "method": "bdev_nvme_attach_controller", 00:22:20.516 "req_id": 1 00:22:20.516 } 00:22:20.516 Got JSON-RPC error response 00:22:20.516 response: 00:22:20.516 { 00:22:20.516 "code": -114, 00:22:20.516 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.516 } 00:22:20.516 15:55:15 
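The three rejected attach attempts above all return JSON-RPC error -114: a controller named NVMe0 already exists on that network path, and `-x disable` additionally refuses any second path. A minimal Python sketch of that duplicate-name policy follows — this is NOT SPDK source code; the registry, function name, and ordering of checks are invented purely to mirror the error messages observed in this log:

```python
# Hypothetical sketch of the duplicate-controller policy the -114 errors
# above illustrate. Not SPDK's implementation: names and check ordering
# are invented for illustration only.

controllers = {}  # controller name -> set of (traddr, trsvcid) paths

def attach_controller(name, traddr, trsvcid, multipath=None):
    """Return a dict shaped like the JSON-RPC error bodies in the log."""
    paths = controllers.setdefault(name, set())
    path = (traddr, trsvcid)
    if paths and multipath == "disable":
        # A controller with this name exists and multipath is refused.
        return {"code": -114,
                "message": f"A controller named {name} already exists "
                           "and multipath is disabled"}
    if path in paths:
        # Same name, same network path: always rejected.
        return {"code": -114,
                "message": f"A controller named {name} already exists "
                           "with the specified network path"}
    paths.add(path)
    return {"result": True}

# The sequence from the log: seed the first path, three rejections,
# then an attach on a new port (4421) is accepted.
r1 = attach_controller("NVMe0", "10.0.0.2", "4420")
r2 = attach_controller("NVMe0", "10.0.0.2", "4420")
r3 = attach_controller("NVMe0", "10.0.0.2", "4420", multipath="disable")
r4 = attach_controller("NVMe0", "10.0.0.2", "4421")
```

In the actual test (`host/multicontroller.sh`), the same sequence is driven through `rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller`, with the `NOT` wrapper asserting each call fails before the port-4421 attach succeeds.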
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.516 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.517 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.517 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.517 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.775 NVMe0n1 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.775 15:55:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.034 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:21.034 15:55:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.971 { 00:22:21.971 "results": [ 00:22:21.971 { 00:22:21.971 "job": "NVMe0n1", 00:22:21.971 "core_mask": "0x1", 00:22:21.971 "workload": "write", 00:22:21.971 "status": "finished", 00:22:21.971 "queue_depth": 128, 00:22:21.971 "io_size": 4096, 00:22:21.971 "runtime": 1.007801, 00:22:21.971 "iops": 25272.846524264216, 00:22:21.971 "mibps": 98.7220567354071, 00:22:21.971 "io_failed": 0, 00:22:21.971 "io_timeout": 0, 00:22:21.971 "avg_latency_us": 5058.338149606447, 00:22:21.971 "min_latency_us": 1568.182857142857, 00:22:21.971 "max_latency_us": 9674.361904761905 00:22:21.971 } 00:22:21.971 ], 00:22:21.971 "core_count": 1 00:22:21.971 } 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
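The `perform_tests` results JSON above reports both IOPS and MiB/s for the one-second bdevperf write run; the throughput figure follows directly from the IOPS and the 4 KiB IO size. A quick arithmetic check on the values embedded in the log (plain Python, no SPDK APIs involved):

```python
# Sanity-check of the bdevperf numbers printed in the results JSON above.
# Values are copied from the log; derived figures should reproduce them.

result = {
    "job": "NVMe0n1",
    "io_size": 4096,             # bytes per IO
    "runtime": 1.007801,         # seconds
    "iops": 25272.846524264216,
    "mibps": 98.7220567354071,
}

# Throughput in MiB/s is IOPS x IO size, converted from bytes.
derived_mibps = result["iops"] * result["io_size"] / (1024 * 1024)

# Approximate total IOs completed during the run.
total_ios = round(result["iops"] * result["runtime"])
```

The derived MiB/s matches the reported `mibps` field, confirming the two columns in the later `Latency(us)` summary table (25272.85 IOPS, 98.72 MiB/s) are consistent.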
bdev_nvme_detach_controller NVMe1 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2068397 ']' 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068397' 00:22:22.230 killing process with pid 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2068397 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.230 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:22.490 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:22.490 [2024-12-09 15:55:15.142652] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:22:22.490 [2024-12-09 15:55:15.142707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068397 ] 00:22:22.490 [2024-12-09 15:55:15.216504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.490 [2024-12-09 15:55:15.257231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.490 [2024-12-09 15:55:16.075734] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 0ce51cfd-ab28-4fab-badc-3fb62c6a3306 already exists 00:22:22.490 [2024-12-09 15:55:16.075762] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:0ce51cfd-ab28-4fab-badc-3fb62c6a3306 alias for bdev NVMe1n1 00:22:22.490 [2024-12-09 15:55:16.075770] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:22.490 Running I/O for 1 seconds... 00:22:22.490 25215.00 IOPS, 98.50 MiB/s 00:22:22.490 Latency(us) 00:22:22.490 [2024-12-09T14:55:17.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.490 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:22.490 NVMe0n1 : 1.01 25272.85 98.72 0.00 0.00 5058.34 1568.18 9674.36 00:22:22.490 [2024-12-09T14:55:17.718Z] =================================================================================================================== 00:22:22.490 [2024-12-09T14:55:17.718Z] Total : 25272.85 98.72 0.00 0.00 5058.34 1568.18 9674.36 00:22:22.490 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.490 00:22:22.490 Latency(us) 00:22:22.490 [2024-12-09T14:55:17.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.490 [2024-12-09T14:55:17.718Z] =================================================================================================================== 00:22:22.490 [2024-12-09T14:55:17.718Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:22.490 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:22.490 rmmod nvme_tcp 00:22:22.490 rmmod nvme_fabrics 00:22:22.490 rmmod nvme_keyring 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2068278 ']' 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2068278 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2068278 ']' 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2068278 
00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068278 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068278' 00:22:22.490 killing process with pid 2068278 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2068278 00:22:22.490 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2068278 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.749 15:55:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.655 15:55:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.655 00:22:24.655 real 0m11.370s 00:22:24.655 user 0m13.049s 00:22:24.655 sys 0m5.220s 00:22:24.655 15:55:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.655 15:55:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.655 ************************************ 00:22:24.655 END TEST nvmf_multicontroller 00:22:24.655 ************************************ 00:22:24.914 15:55:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.914 15:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.915 15:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.915 15:55:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.915 ************************************ 00:22:24.915 START TEST nvmf_aer 00:22:24.915 ************************************ 00:22:24.915 15:55:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.915 * Looking for test storage... 
00:22:24.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.915 --rc genhtml_branch_coverage=1 00:22:24.915 --rc genhtml_function_coverage=1 00:22:24.915 --rc genhtml_legend=1 00:22:24.915 --rc geninfo_all_blocks=1 00:22:24.915 --rc geninfo_unexecuted_blocks=1 00:22:24.915 00:22:24.915 ' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.915 --rc 
genhtml_branch_coverage=1 00:22:24.915 --rc genhtml_function_coverage=1 00:22:24.915 --rc genhtml_legend=1 00:22:24.915 --rc geninfo_all_blocks=1 00:22:24.915 --rc geninfo_unexecuted_blocks=1 00:22:24.915 00:22:24.915 ' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.915 --rc genhtml_branch_coverage=1 00:22:24.915 --rc genhtml_function_coverage=1 00:22:24.915 --rc genhtml_legend=1 00:22:24.915 --rc geninfo_all_blocks=1 00:22:24.915 --rc geninfo_unexecuted_blocks=1 00:22:24.915 00:22:24.915 ' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.915 --rc genhtml_branch_coverage=1 00:22:24.915 --rc genhtml_function_coverage=1 00:22:24.915 --rc genhtml_legend=1 00:22:24.915 --rc geninfo_all_blocks=1 00:22:24.915 --rc geninfo_unexecuted_blocks=1 00:22:24.915 00:22:24.915 ' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.915 15:55:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.915 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.916 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.175 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:25.175 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:25.175 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.175 15:55:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.447 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:30.707 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.707 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.707 15:55:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.707 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.707 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.707 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:30.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:30.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:22:30.708 00:22:30.708 --- 10.0.0.2 ping statistics --- 00:22:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.708 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:22:30.708 00:22:30.708 --- 10.0.0.1 ping statistics --- 00:22:30.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.708 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:30.708 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2072255 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2072255 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2072255 ']' 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.967 15:55:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:30.967 [2024-12-09 15:55:26.008223] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:22:30.967 [2024-12-09 15:55:26.008267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.967 [2024-12-09 15:55:26.085397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.967 [2024-12-09 15:55:26.124549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:30.967 [2024-12-09 15:55:26.124588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.967 [2024-12-09 15:55:26.124594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.967 [2024-12-09 15:55:26.124600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.967 [2024-12-09 15:55:26.124605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.967 [2024-12-09 15:55:26.126158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.967 [2024-12-09 15:55:26.126313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.967 [2024-12-09 15:55:26.126269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.967 [2024-12-09 15:55:26.126315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 [2024-12-09 15:55:26.275710] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 Malloc0 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 [2024-12-09 15:55:26.335361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.227 [ 00:22:31.227 { 00:22:31.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.227 "subtype": "Discovery", 00:22:31.227 "listen_addresses": [], 00:22:31.227 "allow_any_host": true, 00:22:31.227 "hosts": [] 00:22:31.227 }, 00:22:31.227 { 00:22:31.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.227 "subtype": "NVMe", 00:22:31.227 "listen_addresses": [ 00:22:31.227 { 00:22:31.227 "trtype": "TCP", 00:22:31.227 "adrfam": "IPv4", 00:22:31.227 "traddr": "10.0.0.2", 00:22:31.227 "trsvcid": "4420" 00:22:31.227 } 00:22:31.227 ], 00:22:31.227 "allow_any_host": true, 00:22:31.227 "hosts": [], 00:22:31.227 "serial_number": "SPDK00000000000001", 00:22:31.227 "model_number": "SPDK bdev Controller", 00:22:31.227 "max_namespaces": 2, 00:22:31.227 "min_cntlid": 1, 00:22:31.227 "max_cntlid": 65519, 00:22:31.227 "namespaces": [ 00:22:31.227 { 00:22:31.227 "nsid": 1, 00:22:31.227 "bdev_name": "Malloc0", 00:22:31.227 "name": "Malloc0", 00:22:31.227 "nguid": "66B0C333271A41A88C1922F8BC10D94F", 00:22:31.227 "uuid": "66b0c333-271a-41a8-8c19-22f8bc10d94f" 00:22:31.227 } 00:22:31.227 ] 00:22:31.227 } 00:22:31.227 ] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2072320 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:31.227 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 Malloc1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 Asynchronous Event Request test 00:22:31.487 Attaching to 10.0.0.2 00:22:31.487 Attached to 10.0.0.2 00:22:31.487 Registering asynchronous event callbacks... 00:22:31.487 Starting namespace attribute notice tests for all controllers... 00:22:31.487 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:31.487 aer_cb - Changed Namespace 00:22:31.487 Cleaning up... 
00:22:31.487 [ 00:22:31.487 { 00:22:31.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.487 "subtype": "Discovery", 00:22:31.487 "listen_addresses": [], 00:22:31.487 "allow_any_host": true, 00:22:31.487 "hosts": [] 00:22:31.487 }, 00:22:31.487 { 00:22:31.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.487 "subtype": "NVMe", 00:22:31.487 "listen_addresses": [ 00:22:31.487 { 00:22:31.487 "trtype": "TCP", 00:22:31.487 "adrfam": "IPv4", 00:22:31.487 "traddr": "10.0.0.2", 00:22:31.487 "trsvcid": "4420" 00:22:31.487 } 00:22:31.487 ], 00:22:31.487 "allow_any_host": true, 00:22:31.487 "hosts": [], 00:22:31.487 "serial_number": "SPDK00000000000001", 00:22:31.487 "model_number": "SPDK bdev Controller", 00:22:31.487 "max_namespaces": 2, 00:22:31.487 "min_cntlid": 1, 00:22:31.487 "max_cntlid": 65519, 00:22:31.487 "namespaces": [ 00:22:31.487 { 00:22:31.487 "nsid": 1, 00:22:31.487 "bdev_name": "Malloc0", 00:22:31.487 "name": "Malloc0", 00:22:31.487 "nguid": "66B0C333271A41A88C1922F8BC10D94F", 00:22:31.487 "uuid": "66b0c333-271a-41a8-8c19-22f8bc10d94f" 00:22:31.487 }, 00:22:31.487 { 00:22:31.487 "nsid": 2, 00:22:31.487 "bdev_name": "Malloc1", 00:22:31.487 "name": "Malloc1", 00:22:31.487 "nguid": "336414243E104D9BB790FDB3FFDB9BA8", 00:22:31.487 "uuid": "33641424-3e10-4d9b-b790-fdb3ffdb9ba8" 00:22:31.487 } 00:22:31.487 ] 00:22:31.487 } 00:22:31.487 ] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2072320 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.487 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.487 rmmod nvme_tcp 00:22:31.746 rmmod nvme_fabrics 00:22:31.746 rmmod nvme_keyring 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2072255 ']' 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2072255 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2072255 ']' 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2072255 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.746 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2072255 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2072255' 00:22:31.747 killing process with pid 2072255 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2072255 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2072255 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.747 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.006 15:55:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.914 00:22:33.914 real 0m9.102s 00:22:33.914 user 0m5.058s 00:22:33.914 sys 0m4.807s 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.914 ************************************ 00:22:33.914 END TEST nvmf_aer 00:22:33.914 ************************************ 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.914 ************************************ 00:22:33.914 START TEST nvmf_async_init 00:22:33.914 ************************************ 00:22:33.914 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:34.174 * Looking for test storage... 
00:22:34.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.175 15:55:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.175 --rc genhtml_branch_coverage=1 00:22:34.175 --rc genhtml_function_coverage=1 00:22:34.175 --rc genhtml_legend=1 00:22:34.175 --rc geninfo_all_blocks=1 00:22:34.175 --rc geninfo_unexecuted_blocks=1 00:22:34.175 
00:22:34.175 ' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.175 --rc genhtml_branch_coverage=1 00:22:34.175 --rc genhtml_function_coverage=1 00:22:34.175 --rc genhtml_legend=1 00:22:34.175 --rc geninfo_all_blocks=1 00:22:34.175 --rc geninfo_unexecuted_blocks=1 00:22:34.175 00:22:34.175 ' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.175 --rc genhtml_branch_coverage=1 00:22:34.175 --rc genhtml_function_coverage=1 00:22:34.175 --rc genhtml_legend=1 00:22:34.175 --rc geninfo_all_blocks=1 00:22:34.175 --rc geninfo_unexecuted_blocks=1 00:22:34.175 00:22:34.175 ' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.175 --rc genhtml_branch_coverage=1 00:22:34.175 --rc genhtml_function_coverage=1 00:22:34.175 --rc genhtml_legend=1 00:22:34.175 --rc geninfo_all_blocks=1 00:22:34.175 --rc geninfo_unexecuted_blocks=1 00:22:34.175 00:22:34.175 ' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.175 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0c343106064440fab56d0dc34f584231 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.176 15:55:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.749 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.750 15:55:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:40.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.750 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.750 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.750 15:55:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:40.750 00:22:40.750 --- 10.0.0.2 ping statistics --- 00:22:40.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.750 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:22:40.750 00:22:40.750 --- 10.0.0.1 ping statistics --- 00:22:40.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.750 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:40.750 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2075995 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2075995 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2075995 ']' 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 [2024-12-09 15:55:35.359947] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:22:40.751 [2024-12-09 15:55:35.359993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.751 [2024-12-09 15:55:35.435058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.751 [2024-12-09 15:55:35.472180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.751 [2024-12-09 15:55:35.472212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.751 [2024-12-09 15:55:35.472222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.751 [2024-12-09 15:55:35.472228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.751 [2024-12-09 15:55:35.472249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
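After `nvmf_tgt` starts, `waitforlisten` blocks until the app's RPC socket (`/var/tmp/spdk.sock`) is ready before any RPCs are issued. A hedged sketch of that polling pattern — `wait_for_sock` and its parameters are illustrative names, not SPDK's actual helper, and this version only checks path existence rather than probing the RPC endpoint:

```shell
# Poll until a path appears, with a bounded number of retries.
# (Sketch only: SPDK's real waitforlisten also issues a probe RPC.)
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -e "$sock" ] && return 0   # -S would test specifically for a socket
    sleep 0.1
  done
  return 1
}
```

In the log the target runs inside the `cvl_0_0_ns_spdk` network namespace, but the RPC socket lives on the shared filesystem, so a wait like this works unchanged from the default namespace.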
00:22:40.751 [2024-12-09 15:55:35.472771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 [2024-12-09 15:55:35.620061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 null0 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0c343106064440fab56d0dc34f584231 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 [2024-12-09 15:55:35.672319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 nvme0n1 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.751 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.751 [ 00:22:40.751 { 00:22:40.751 "name": "nvme0n1", 00:22:40.751 "aliases": [ 00:22:40.751 "0c343106-0644-40fa-b56d-0dc34f584231" 00:22:40.751 ], 00:22:40.751 "product_name": "NVMe disk", 00:22:40.751 "block_size": 512, 00:22:40.751 "num_blocks": 2097152, 00:22:40.751 "uuid": "0c343106-0644-40fa-b56d-0dc34f584231", 00:22:40.751 "numa_id": 1, 00:22:40.751 "assigned_rate_limits": { 00:22:40.751 "rw_ios_per_sec": 0, 00:22:40.751 "rw_mbytes_per_sec": 0, 00:22:40.751 "r_mbytes_per_sec": 0, 00:22:40.751 "w_mbytes_per_sec": 0 00:22:40.751 }, 00:22:40.751 "claimed": false, 00:22:40.751 "zoned": false, 00:22:40.751 "supported_io_types": { 00:22:40.751 "read": true, 00:22:40.751 "write": true, 00:22:40.751 "unmap": false, 00:22:40.751 "flush": true, 00:22:40.751 "reset": true, 00:22:40.751 "nvme_admin": true, 00:22:40.751 "nvme_io": true, 00:22:40.751 "nvme_io_md": false, 00:22:40.751 "write_zeroes": true, 00:22:40.751 "zcopy": false, 00:22:40.751 "get_zone_info": false, 00:22:40.751 "zone_management": false, 00:22:40.751 "zone_append": false, 00:22:40.751 "compare": true, 00:22:40.751 "compare_and_write": true, 00:22:40.751 "abort": true, 00:22:40.751 "seek_hole": false, 00:22:40.751 "seek_data": false, 00:22:40.751 "copy": true, 00:22:40.751 
"nvme_iov_md": false 00:22:40.751 }, 00:22:40.751 "memory_domains": [ 00:22:40.751 { 00:22:40.751 "dma_device_id": "system", 00:22:40.751 "dma_device_type": 1 00:22:40.751 } 00:22:40.751 ], 00:22:40.751 "driver_specific": { 00:22:40.751 "nvme": [ 00:22:40.751 { 00:22:40.751 "trid": { 00:22:40.751 "trtype": "TCP", 00:22:40.751 "adrfam": "IPv4", 00:22:40.751 "traddr": "10.0.0.2", 00:22:40.751 "trsvcid": "4420", 00:22:40.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.751 }, 00:22:40.751 "ctrlr_data": { 00:22:40.751 "cntlid": 1, 00:22:40.751 "vendor_id": "0x8086", 00:22:40.751 "model_number": "SPDK bdev Controller", 00:22:40.751 "serial_number": "00000000000000000000", 00:22:40.751 "firmware_revision": "25.01", 00:22:40.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.752 "oacs": { 00:22:40.752 "security": 0, 00:22:40.752 "format": 0, 00:22:40.752 "firmware": 0, 00:22:40.752 "ns_manage": 0 00:22:40.752 }, 00:22:40.752 "multi_ctrlr": true, 00:22:40.752 "ana_reporting": false 00:22:40.752 }, 00:22:40.752 "vs": { 00:22:40.752 "nvme_version": "1.3" 00:22:40.752 }, 00:22:40.752 "ns_data": { 00:22:40.752 "id": 1, 00:22:40.752 "can_share": true 00:22:40.752 } 00:22:40.752 } 00:22:40.752 ], 00:22:40.752 "mp_policy": "active_passive" 00:22:40.752 } 00:22:40.752 } 00:22:40.752 ] 00:22:40.752 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.752 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:40.752 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.752 15:55:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.752 [2024-12-09 15:55:35.936859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.752 [2024-12-09 15:55:35.936911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1de9550 (9): Bad file descriptor 00:22:41.011 [2024-12-09 15:55:36.069290] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:41.011 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.011 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:41.011 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.011 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.011 [ 00:22:41.011 { 00:22:41.011 "name": "nvme0n1", 00:22:41.011 "aliases": [ 00:22:41.011 "0c343106-0644-40fa-b56d-0dc34f584231" 00:22:41.011 ], 00:22:41.011 "product_name": "NVMe disk", 00:22:41.011 "block_size": 512, 00:22:41.011 "num_blocks": 2097152, 00:22:41.011 "uuid": "0c343106-0644-40fa-b56d-0dc34f584231", 00:22:41.011 "numa_id": 1, 00:22:41.011 "assigned_rate_limits": { 00:22:41.011 "rw_ios_per_sec": 0, 00:22:41.011 "rw_mbytes_per_sec": 0, 00:22:41.011 "r_mbytes_per_sec": 0, 00:22:41.011 "w_mbytes_per_sec": 0 00:22:41.011 }, 00:22:41.011 "claimed": false, 00:22:41.011 "zoned": false, 00:22:41.011 "supported_io_types": { 00:22:41.012 "read": true, 00:22:41.012 "write": true, 00:22:41.012 "unmap": false, 00:22:41.012 "flush": true, 00:22:41.012 "reset": true, 00:22:41.012 "nvme_admin": true, 00:22:41.012 "nvme_io": true, 00:22:41.012 "nvme_io_md": false, 00:22:41.012 "write_zeroes": true, 00:22:41.012 "zcopy": false, 00:22:41.012 "get_zone_info": false, 00:22:41.012 "zone_management": false, 00:22:41.012 "zone_append": false, 00:22:41.012 "compare": true, 00:22:41.012 "compare_and_write": true, 00:22:41.012 "abort": true, 00:22:41.012 "seek_hole": false, 00:22:41.012 "seek_data": false, 00:22:41.012 "copy": true, 00:22:41.012 "nvme_iov_md": false 00:22:41.012 }, 00:22:41.012 "memory_domains": [ 
00:22:41.012 { 00:22:41.012 "dma_device_id": "system", 00:22:41.012 "dma_device_type": 1 00:22:41.012 } 00:22:41.012 ], 00:22:41.012 "driver_specific": { 00:22:41.012 "nvme": [ 00:22:41.012 { 00:22:41.012 "trid": { 00:22:41.012 "trtype": "TCP", 00:22:41.012 "adrfam": "IPv4", 00:22:41.012 "traddr": "10.0.0.2", 00:22:41.012 "trsvcid": "4420", 00:22:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:41.012 }, 00:22:41.012 "ctrlr_data": { 00:22:41.012 "cntlid": 2, 00:22:41.012 "vendor_id": "0x8086", 00:22:41.012 "model_number": "SPDK bdev Controller", 00:22:41.012 "serial_number": "00000000000000000000", 00:22:41.012 "firmware_revision": "25.01", 00:22:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:41.012 "oacs": { 00:22:41.012 "security": 0, 00:22:41.012 "format": 0, 00:22:41.012 "firmware": 0, 00:22:41.012 "ns_manage": 0 00:22:41.012 }, 00:22:41.012 "multi_ctrlr": true, 00:22:41.012 "ana_reporting": false 00:22:41.012 }, 00:22:41.012 "vs": { 00:22:41.012 "nvme_version": "1.3" 00:22:41.012 }, 00:22:41.012 "ns_data": { 00:22:41.012 "id": 1, 00:22:41.012 "can_share": true 00:22:41.012 } 00:22:41.012 } 00:22:41.012 ], 00:22:41.012 "mp_policy": "active_passive" 00:22:41.012 } 00:22:41.012 } 00:22:41.012 ] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.CXJE4Wztx5 
00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.CXJE4Wztx5 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.CXJE4Wztx5 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 [2024-12-09 15:55:36.145486] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.012 [2024-12-09 15:55:36.145592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.012 [2024-12-09 15:55:36.165555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:41.012 nvme0n1 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.012 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.272 [ 00:22:41.272 { 00:22:41.272 "name": "nvme0n1", 00:22:41.272 "aliases": [ 00:22:41.272 "0c343106-0644-40fa-b56d-0dc34f584231" 00:22:41.272 ], 00:22:41.272 "product_name": "NVMe disk", 00:22:41.272 "block_size": 512, 00:22:41.272 "num_blocks": 2097152, 00:22:41.272 "uuid": "0c343106-0644-40fa-b56d-0dc34f584231", 00:22:41.272 "numa_id": 1, 00:22:41.272 "assigned_rate_limits": { 00:22:41.272 "rw_ios_per_sec": 0, 00:22:41.272 
"rw_mbytes_per_sec": 0, 00:22:41.272 "r_mbytes_per_sec": 0, 00:22:41.272 "w_mbytes_per_sec": 0 00:22:41.272 }, 00:22:41.272 "claimed": false, 00:22:41.272 "zoned": false, 00:22:41.272 "supported_io_types": { 00:22:41.272 "read": true, 00:22:41.272 "write": true, 00:22:41.272 "unmap": false, 00:22:41.272 "flush": true, 00:22:41.272 "reset": true, 00:22:41.272 "nvme_admin": true, 00:22:41.272 "nvme_io": true, 00:22:41.272 "nvme_io_md": false, 00:22:41.272 "write_zeroes": true, 00:22:41.272 "zcopy": false, 00:22:41.272 "get_zone_info": false, 00:22:41.272 "zone_management": false, 00:22:41.272 "zone_append": false, 00:22:41.272 "compare": true, 00:22:41.272 "compare_and_write": true, 00:22:41.272 "abort": true, 00:22:41.272 "seek_hole": false, 00:22:41.272 "seek_data": false, 00:22:41.272 "copy": true, 00:22:41.272 "nvme_iov_md": false 00:22:41.272 }, 00:22:41.272 "memory_domains": [ 00:22:41.272 { 00:22:41.272 "dma_device_id": "system", 00:22:41.272 "dma_device_type": 1 00:22:41.272 } 00:22:41.272 ], 00:22:41.272 "driver_specific": { 00:22:41.272 "nvme": [ 00:22:41.272 { 00:22:41.272 "trid": { 00:22:41.272 "trtype": "TCP", 00:22:41.272 "adrfam": "IPv4", 00:22:41.272 "traddr": "10.0.0.2", 00:22:41.272 "trsvcid": "4421", 00:22:41.272 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:41.272 }, 00:22:41.272 "ctrlr_data": { 00:22:41.272 "cntlid": 3, 00:22:41.272 "vendor_id": "0x8086", 00:22:41.272 "model_number": "SPDK bdev Controller", 00:22:41.272 "serial_number": "00000000000000000000", 00:22:41.272 "firmware_revision": "25.01", 00:22:41.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:41.272 "oacs": { 00:22:41.272 "security": 0, 00:22:41.272 "format": 0, 00:22:41.272 "firmware": 0, 00:22:41.272 "ns_manage": 0 00:22:41.272 }, 00:22:41.272 "multi_ctrlr": true, 00:22:41.272 "ana_reporting": false 00:22:41.272 }, 00:22:41.272 "vs": { 00:22:41.272 "nvme_version": "1.3" 00:22:41.272 }, 00:22:41.272 "ns_data": { 00:22:41.272 "id": 1, 00:22:41.272 "can_share": true 00:22:41.272 } 
00:22:41.272 } 00:22:41.272 ], 00:22:41.272 "mp_policy": "active_passive" 00:22:41.272 } 00:22:41.272 } 00:22:41.272 ] 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.CXJE4Wztx5 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.272 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.273 rmmod nvme_tcp 00:22:41.273 rmmod nvme_fabrics 00:22:41.273 rmmod nvme_keyring 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:41.273 15:55:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2075995 ']' 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2075995 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2075995 ']' 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2075995 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2075995 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2075995' 00:22:41.273 killing process with pid 2075995 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2075995 00:22:41.273 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2075995 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.532 
15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.532 15:55:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.440 00:22:43.440 real 0m9.491s 00:22:43.440 user 0m3.089s 00:22:43.440 sys 0m4.791s 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:43.440 ************************************ 00:22:43.440 END TEST nvmf_async_init 00:22:43.440 ************************************ 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.440 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.705 ************************************ 00:22:43.705 START TEST dma 00:22:43.705 ************************************ 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:43.705 * Looking for test storage... 00:22:43.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.705 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:43.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.706 --rc genhtml_branch_coverage=1 00:22:43.706 --rc genhtml_function_coverage=1 00:22:43.706 --rc genhtml_legend=1 00:22:43.706 --rc geninfo_all_blocks=1 00:22:43.706 --rc geninfo_unexecuted_blocks=1 00:22:43.706 00:22:43.706 ' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:43.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.706 --rc genhtml_branch_coverage=1 00:22:43.706 --rc genhtml_function_coverage=1 
00:22:43.706 --rc genhtml_legend=1 00:22:43.706 --rc geninfo_all_blocks=1 00:22:43.706 --rc geninfo_unexecuted_blocks=1 00:22:43.706 00:22:43.706 ' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:43.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.706 --rc genhtml_branch_coverage=1 00:22:43.706 --rc genhtml_function_coverage=1 00:22:43.706 --rc genhtml_legend=1 00:22:43.706 --rc geninfo_all_blocks=1 00:22:43.706 --rc geninfo_unexecuted_blocks=1 00:22:43.706 00:22:43.706 ' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:43.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.706 --rc genhtml_branch_coverage=1 00:22:43.706 --rc genhtml_function_coverage=1 00:22:43.706 --rc genhtml_legend=1 00:22:43.706 --rc geninfo_all_blocks=1 00:22:43.706 --rc geninfo_unexecuted_blocks=1 00:22:43.706 00:22:43.706 ' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:43.706 
15:55:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:43.706 00:22:43.706 real 0m0.207s 00:22:43.706 user 0m0.129s 00:22:43.706 sys 0m0.091s 00:22:43.706 15:55:38 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:43.706 ************************************ 00:22:43.706 END TEST dma 00:22:43.706 ************************************ 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.706 15:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.057 ************************************ 00:22:44.057 START TEST nvmf_identify 00:22:44.057 ************************************ 00:22:44.057 15:55:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:44.057 * Looking for test storage... 
00:22:44.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.057 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:44.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.057 --rc genhtml_branch_coverage=1 00:22:44.057 --rc genhtml_function_coverage=1 00:22:44.057 --rc genhtml_legend=1 00:22:44.057 --rc geninfo_all_blocks=1 00:22:44.058 --rc geninfo_unexecuted_blocks=1 00:22:44.058 00:22:44.058 ' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:22:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.058 --rc genhtml_branch_coverage=1 00:22:44.058 --rc genhtml_function_coverage=1 00:22:44.058 --rc genhtml_legend=1 00:22:44.058 --rc geninfo_all_blocks=1 00:22:44.058 --rc geninfo_unexecuted_blocks=1 00:22:44.058 00:22:44.058 ' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.058 --rc genhtml_branch_coverage=1 00:22:44.058 --rc genhtml_function_coverage=1 00:22:44.058 --rc genhtml_legend=1 00:22:44.058 --rc geninfo_all_blocks=1 00:22:44.058 --rc geninfo_unexecuted_blocks=1 00:22:44.058 00:22:44.058 ' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.058 --rc genhtml_branch_coverage=1 00:22:44.058 --rc genhtml_function_coverage=1 00:22:44.058 --rc genhtml_legend=1 00:22:44.058 --rc geninfo_all_blocks=1 00:22:44.058 --rc geninfo_unexecuted_blocks=1 00:22:44.058 00:22:44.058 ' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.058 15:55:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.672 15:55:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:50.672 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.672 
15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:50.672 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:50.672 Found net devices under 0000:af:00.0: cvl_0_0 00:22:50.672 15:55:44 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:50.672 Found net devices under 0000:af:00.1: cvl_0_1 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.672 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.673 15:55:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:22:50.673 00:22:50.673 --- 10.0.0.2 ping statistics --- 00:22:50.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.673 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:22:50.673 00:22:50.673 --- 10.0.0.1 ping statistics --- 00:22:50.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.673 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2079718 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2079718 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2079718 ']' 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 [2024-12-09 15:55:45.194230] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:22:50.673 [2024-12-09 15:55:45.194272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.673 [2024-12-09 15:55:45.273427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.673 [2024-12-09 15:55:45.313720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.673 [2024-12-09 15:55:45.313758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.673 [2024-12-09 15:55:45.313764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.673 [2024-12-09 15:55:45.313770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.673 [2024-12-09 15:55:45.313775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.673 [2024-12-09 15:55:45.315355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.673 [2024-12-09 15:55:45.315460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.673 [2024-12-09 15:55:45.315586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.673 [2024-12-09 15:55:45.315588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 [2024-12-09 15:55:45.424945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 Malloc0 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 [2024-12-09 15:55:45.520889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 15:55:45 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.673 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.673 [ 00:22:50.673 { 00:22:50.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:50.673 "subtype": "Discovery", 00:22:50.673 "listen_addresses": [ 00:22:50.673 { 00:22:50.673 "trtype": "TCP", 00:22:50.673 "adrfam": "IPv4", 00:22:50.673 "traddr": "10.0.0.2", 00:22:50.673 "trsvcid": "4420" 00:22:50.673 } 00:22:50.673 ], 00:22:50.673 "allow_any_host": true, 00:22:50.673 "hosts": [] 00:22:50.673 }, 00:22:50.673 { 00:22:50.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.673 "subtype": "NVMe", 00:22:50.673 "listen_addresses": [ 00:22:50.673 { 00:22:50.673 "trtype": "TCP", 00:22:50.673 "adrfam": "IPv4", 00:22:50.673 "traddr": "10.0.0.2", 00:22:50.673 "trsvcid": "4420" 00:22:50.673 } 00:22:50.673 ], 00:22:50.673 "allow_any_host": true, 00:22:50.673 "hosts": [], 00:22:50.673 "serial_number": "SPDK00000000000001", 00:22:50.673 "model_number": "SPDK bdev Controller", 00:22:50.673 "max_namespaces": 32, 00:22:50.673 "min_cntlid": 1, 00:22:50.673 "max_cntlid": 65519, 00:22:50.674 "namespaces": [ 00:22:50.674 { 00:22:50.674 "nsid": 1, 00:22:50.674 "bdev_name": "Malloc0", 00:22:50.674 "name": "Malloc0", 00:22:50.674 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:50.674 "eui64": "ABCDEF0123456789", 00:22:50.674 "uuid": "bea4d07a-5710-42c0-93db-4eb6c03b9b3a" 00:22:50.674 } 00:22:50.674 ] 00:22:50.674 } 00:22:50.674 ] 00:22:50.674 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.674 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:50.674 [2024-12-09 15:55:45.570328] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:22:50.674 [2024-12-09 15:55:45.570375] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079830 ] 00:22:50.674 [2024-12-09 15:55:45.612244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:50.674 [2024-12-09 15:55:45.612291] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:50.674 [2024-12-09 15:55:45.612296] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:50.674 [2024-12-09 15:55:45.612311] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:50.674 [2024-12-09 15:55:45.612321] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:50.674 [2024-12-09 15:55:45.612820] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:50.674 [2024-12-09 15:55:45.612852] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbad690 0 00:22:50.674 [2024-12-09 15:55:45.623226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:50.674 [2024-12-09 15:55:45.623241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:50.674 [2024-12-09 15:55:45.623245] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:50.674 [2024-12-09 15:55:45.623248] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:50.674 [2024-12-09 15:55:45.623280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.623285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.623288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.623300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:50.674 [2024-12-09 15:55:45.623318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.631228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.631236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.631240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.631256] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:50.674 [2024-12-09 15:55:45.631262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:50.674 [2024-12-09 15:55:45.631266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:50.674 [2024-12-09 15:55:45.631282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 
00:22:50.674 [2024-12-09 15:55:45.631295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.631309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.631467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.631473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.631476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.631484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:50.674 [2024-12-09 15:55:45.631490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:50.674 [2024-12-09 15:55:45.631497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.631509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.631519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.631581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.631587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:50.674 [2024-12-09 15:55:45.631589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.631597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:50.674 [2024-12-09 15:55:45.631604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:50.674 [2024-12-09 15:55:45.631610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.631622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.631632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.631692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.631697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.631700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.631708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:50.674 [2024-12-09 15:55:45.631716] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.631730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.631740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.631800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.631806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.631809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.631816] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:50.674 [2024-12-09 15:55:45.631820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:50.674 [2024-12-09 15:55:45.631827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:50.674 [2024-12-09 15:55:45.631936] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:50.674 [2024-12-09 15:55:45.631941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:50.674 [2024-12-09 15:55:45.631948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.631954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.631960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.631970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 15:55:45.632033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.632038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.632041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.632044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.674 [2024-12-09 15:55:45.632048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:50.674 [2024-12-09 15:55:45.632056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.632059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.674 [2024-12-09 15:55:45.632063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.674 [2024-12-09 15:55:45.632068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.674 [2024-12-09 15:55:45.632078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.674 [2024-12-09 
15:55:45.632136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.674 [2024-12-09 15:55:45.632141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.674 [2024-12-09 15:55:45.632144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.675 [2024-12-09 15:55:45.632151] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:50.675 [2024-12-09 15:55:45.632156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:50.675 [2024-12-09 15:55:45.632171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.675 [2024-12-09 15:55:45.632197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.675 [2024-12-09 15:55:45.632314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.675 [2024-12-09 15:55:45.632320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:22:50.675 [2024-12-09 15:55:45.632323] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632326] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbad690): datao=0, datal=4096, cccid=0 00:22:50.675 [2024-12-09 15:55:45.632330] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0f100) on tqpair(0xbad690): expected_datao=0, payload_size=4096 00:22:50.675 [2024-12-09 15:55:45.632335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632341] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632344] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.675 [2024-12-09 15:55:45.632365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.675 [2024-12-09 15:55:45.632368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.675 [2024-12-09 15:55:45.632377] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:50.675 [2024-12-09 15:55:45.632384] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:50.675 [2024-12-09 15:55:45.632388] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:50.675 [2024-12-09 15:55:45.632392] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:50.675 [2024-12-09 15:55:45.632396] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:22:50.675 [2024-12-09 15:55:45.632400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.675 [2024-12-09 15:55:45.632437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.675 [2024-12-09 15:55:45.632505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.675 [2024-12-09 15:55:45.632510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.675 [2024-12-09 15:55:45.632515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690 00:22:50.675 [2024-12-09 15:55:45.632524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.675 [2024-12-09 15:55:45.632541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.675 [2024-12-09 15:55:45.632557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.675 [2024-12-09 15:55:45.632573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.675 [2024-12-09 15:55:45.632588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:22:50.675 [2024-12-09 15:55:45.632604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.675 [2024-12-09 15:55:45.632623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f100, cid 0, qid 0 00:22:50.675 [2024-12-09 15:55:45.632627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f280, cid 1, qid 0 00:22:50.675 [2024-12-09 15:55:45.632631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f400, cid 2, qid 0 00:22:50.675 [2024-12-09 15:55:45.632635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.675 [2024-12-09 15:55:45.632639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f700, cid 4, qid 0 00:22:50.675 [2024-12-09 15:55:45.632736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.675 [2024-12-09 15:55:45.632742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.675 [2024-12-09 15:55:45.632745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f700) on tqpair=0xbad690 00:22:50.675 [2024-12-09 15:55:45.632752] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:50.675 [2024-12-09 15:55:45.632758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:50.675 [2024-12-09 15:55:45.632767] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.675 [2024-12-09 15:55:45.632785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f700, cid 4, qid 0 00:22:50.675 [2024-12-09 15:55:45.632854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.675 [2024-12-09 15:55:45.632859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.675 [2024-12-09 15:55:45.632862] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632865] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbad690): datao=0, datal=4096, cccid=4 00:22:50.675 [2024-12-09 15:55:45.632869] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0f700) on tqpair(0xbad690): expected_datao=0, payload_size=4096 00:22:50.675 [2024-12-09 15:55:45.632873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632883] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632887] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.675 [2024-12-09 15:55:45.632918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.675 [2024-12-09 15:55:45.632921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f700) on tqpair=0xbad690 00:22:50.675 [2024-12-09 15:55:45.632934] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:50.675 [2024-12-09 15:55:45.632953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.675 [2024-12-09 15:55:45.632968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.675 [2024-12-09 15:55:45.632975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbad690) 00:22:50.675 [2024-12-09 15:55:45.632980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.675 [2024-12-09 15:55:45.632992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f700, cid 4, qid 0 00:22:50.675 [2024-12-09 15:55:45.632997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f880, cid 5, qid 0 00:22:50.675 [2024-12-09 15:55:45.633096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.675 [2024-12-09 15:55:45.633101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.676 [2024-12-09 15:55:45.633104] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.633107] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbad690): datao=0, datal=1024, cccid=4 00:22:50.676 [2024-12-09 15:55:45.633111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0f700) on tqpair(0xbad690): expected_datao=0, 
payload_size=1024 00:22:50.676 [2024-12-09 15:55:45.633115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.633120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.633123] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.633130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.676 [2024-12-09 15:55:45.633134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.676 [2024-12-09 15:55:45.633137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.633140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f880) on tqpair=0xbad690 00:22:50.676 [2024-12-09 15:55:45.673353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.676 [2024-12-09 15:55:45.673366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.676 [2024-12-09 15:55:45.673369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f700) on tqpair=0xbad690 00:22:50.676 [2024-12-09 15:55:45.673385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbad690) 00:22:50.676 [2024-12-09 15:55:45.673396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.676 [2024-12-09 15:55:45.673412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f700, cid 4, qid 0 00:22:50.676 [2024-12-09 15:55:45.673486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.676 [2024-12-09 15:55:45.673492] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.676 [2024-12-09 15:55:45.673495] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673498] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbad690): datao=0, datal=3072, cccid=4 00:22:50.676 [2024-12-09 15:55:45.673503] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0f700) on tqpair(0xbad690): expected_datao=0, payload_size=3072 00:22:50.676 [2024-12-09 15:55:45.673506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.676 [2024-12-09 15:55:45.673545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.676 [2024-12-09 15:55:45.673548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f700) on tqpair=0xbad690 00:22:50.676 [2024-12-09 15:55:45.673558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbad690) 00:22:50.676 [2024-12-09 15:55:45.673567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.676 [2024-12-09 15:55:45.673580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f700, cid 4, qid 0 00:22:50.676 [2024-12-09 15:55:45.673648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.676 [2024-12-09 
15:55:45.673653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.676 [2024-12-09 15:55:45.673656] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673659] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbad690): datao=0, datal=8, cccid=4 00:22:50.676 [2024-12-09 15:55:45.673663] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc0f700) on tqpair(0xbad690): expected_datao=0, payload_size=8 00:22:50.676 [2024-12-09 15:55:45.673666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673672] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.673675] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.715228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.676 [2024-12-09 15:55:45.715245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.676 [2024-12-09 15:55:45.715249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.676 [2024-12-09 15:55:45.715253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f700) on tqpair=0xbad690 00:22:50.676 ===================================================== 00:22:50.676 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:50.676 ===================================================== 00:22:50.676 Controller Capabilities/Features 00:22:50.676 ================================ 00:22:50.676 Vendor ID: 0000 00:22:50.676 Subsystem Vendor ID: 0000 00:22:50.676 Serial Number: .................... 00:22:50.676 Model Number: ........................................ 
00:22:50.676 Firmware Version: 25.01
00:22:50.676 Recommended Arb Burst: 0
00:22:50.676 IEEE OUI Identifier: 00 00 00
00:22:50.676 Multi-path I/O
00:22:50.676 May have multiple subsystem ports: No
00:22:50.676 May have multiple controllers: No
00:22:50.676 Associated with SR-IOV VF: No
00:22:50.676 Max Data Transfer Size: 131072
00:22:50.676 Max Number of Namespaces: 0
00:22:50.676 Max Number of I/O Queues: 1024
00:22:50.676 NVMe Specification Version (VS): 1.3
00:22:50.676 NVMe Specification Version (Identify): 1.3
00:22:50.676 Maximum Queue Entries: 128
00:22:50.676 Contiguous Queues Required: Yes
00:22:50.676 Arbitration Mechanisms Supported
00:22:50.676 Weighted Round Robin: Not Supported
00:22:50.676 Vendor Specific: Not Supported
00:22:50.676 Reset Timeout: 15000 ms
00:22:50.676 Doorbell Stride: 4 bytes
00:22:50.676 NVM Subsystem Reset: Not Supported
00:22:50.676 Command Sets Supported
00:22:50.676 NVM Command Set: Supported
00:22:50.676 Boot Partition: Not Supported
00:22:50.676 Memory Page Size Minimum: 4096 bytes
00:22:50.676 Memory Page Size Maximum: 4096 bytes
00:22:50.676 Persistent Memory Region: Not Supported
00:22:50.676 Optional Asynchronous Events Supported
00:22:50.676 Namespace Attribute Notices: Not Supported
00:22:50.676 Firmware Activation Notices: Not Supported
00:22:50.676 ANA Change Notices: Not Supported
00:22:50.676 PLE Aggregate Log Change Notices: Not Supported
00:22:50.676 LBA Status Info Alert Notices: Not Supported
00:22:50.676 EGE Aggregate Log Change Notices: Not Supported
00:22:50.676 Normal NVM Subsystem Shutdown event: Not Supported
00:22:50.676 Zone Descriptor Change Notices: Not Supported
00:22:50.676 Discovery Log Change Notices: Supported
00:22:50.676 Controller Attributes
00:22:50.676 128-bit Host Identifier: Not Supported
00:22:50.676 Non-Operational Permissive Mode: Not Supported
00:22:50.676 NVM Sets: Not Supported
00:22:50.676 Read Recovery Levels: Not Supported
00:22:50.676 Endurance Groups: Not Supported
00:22:50.676 Predictable Latency Mode: Not Supported
00:22:50.676 Traffic Based Keep Alive: Not Supported
00:22:50.676 Namespace Granularity: Not Supported
00:22:50.676 SQ Associations: Not Supported
00:22:50.676 UUID List: Not Supported
00:22:50.676 Multi-Domain Subsystem: Not Supported
00:22:50.676 Fixed Capacity Management: Not Supported
00:22:50.676 Variable Capacity Management: Not Supported
00:22:50.676 Delete Endurance Group: Not Supported
00:22:50.676 Delete NVM Set: Not Supported
00:22:50.676 Extended LBA Formats Supported: Not Supported
00:22:50.676 Flexible Data Placement Supported: Not Supported
00:22:50.676
00:22:50.676 Controller Memory Buffer Support
00:22:50.676 ================================
00:22:50.676 Supported: No
00:22:50.676
00:22:50.676 Persistent Memory Region Support
00:22:50.676 ================================
00:22:50.676 Supported: No
00:22:50.676
00:22:50.676 Admin Command Set Attributes
00:22:50.676 ============================
00:22:50.676 Security Send/Receive: Not Supported
00:22:50.676 Format NVM: Not Supported
00:22:50.676 Firmware Activate/Download: Not Supported
00:22:50.676 Namespace Management: Not Supported
00:22:50.676 Device Self-Test: Not Supported
00:22:50.676 Directives: Not Supported
00:22:50.676 NVMe-MI: Not Supported
00:22:50.676 Virtualization Management: Not Supported
00:22:50.676 Doorbell Buffer Config: Not Supported
00:22:50.676 Get LBA Status Capability: Not Supported
00:22:50.676 Command & Feature Lockdown Capability: Not Supported
00:22:50.677 Abort Command Limit: 1
00:22:50.677 Async Event Request Limit: 4
00:22:50.677 Number of Firmware Slots: N/A
00:22:50.677 Firmware Slot 1 Read-Only: N/A
00:22:50.677 Firmware Activation Without Reset: N/A
00:22:50.677 Multiple Update Detection Support: N/A
00:22:50.677 Firmware Update Granularity: No Information Provided
00:22:50.677 Per-Namespace SMART Log: No
00:22:50.677 Asymmetric Namespace Access Log Page: Not Supported
00:22:50.677 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:50.677 Command Effects Log Page: Not Supported
00:22:50.677 Get Log Page Extended Data: Supported
00:22:50.677 Telemetry Log Pages: Not Supported
00:22:50.677 Persistent Event Log Pages: Not Supported
00:22:50.677 Supported Log Pages Log Page: May Support
00:22:50.677 Commands Supported & Effects Log Page: Not Supported
00:22:50.677 Feature Identifiers & Effects Log Page: May Support
00:22:50.677 NVMe-MI Commands & Effects Log Page: May Support
00:22:50.677 Data Area 4 for Telemetry Log: Not Supported
00:22:50.677 Error Log Page Entries Supported: 128
00:22:50.677 Keep Alive: Not Supported
00:22:50.677
00:22:50.677 NVM Command Set Attributes
00:22:50.677 ==========================
00:22:50.677 Submission Queue Entry Size
00:22:50.677 Max: 1
00:22:50.677 Min: 1
00:22:50.677 Completion Queue Entry Size
00:22:50.677 Max: 1
00:22:50.677 Min: 1
00:22:50.677 Number of Namespaces: 0
00:22:50.677 Compare Command: Not Supported
00:22:50.677 Write Uncorrectable Command: Not Supported
00:22:50.677 Dataset Management Command: Not Supported
00:22:50.677 Write Zeroes Command: Not Supported
00:22:50.677 Set Features Save Field: Not Supported
00:22:50.677 Reservations: Not Supported
00:22:50.677 Timestamp: Not Supported
00:22:50.677 Copy: Not Supported
00:22:50.677 Volatile Write Cache: Not Present
00:22:50.677 Atomic Write Unit (Normal): 1
00:22:50.677 Atomic Write Unit (PFail): 1
00:22:50.677 Atomic Compare & Write Unit: 1
00:22:50.677 Fused Compare & Write: Supported
00:22:50.677 Scatter-Gather List
00:22:50.677 SGL Command Set: Supported
00:22:50.677 SGL Keyed: Supported
00:22:50.677 SGL Bit Bucket Descriptor: Not Supported
00:22:50.677 SGL Metadata Pointer: Not Supported
00:22:50.677 Oversized SGL: Not Supported
00:22:50.677 SGL Metadata Address: Not Supported
00:22:50.677 SGL Offset: Supported
00:22:50.677 Transport SGL Data Block: Not Supported
00:22:50.677 Replay Protected Memory Block: Not Supported
00:22:50.677
00:22:50.677 Firmware Slot Information
00:22:50.677 =========================
00:22:50.677 Active slot: 0
00:22:50.677
00:22:50.677
00:22:50.677 Error Log
00:22:50.677 =========
00:22:50.677
00:22:50.677 Active Namespaces
00:22:50.677 =================
00:22:50.677 Discovery Log Page
00:22:50.677 ==================
00:22:50.677 Generation Counter: 2
00:22:50.677 Number of Records: 2
00:22:50.677 Record Format: 0
00:22:50.677
00:22:50.677 Discovery Log Entry 0
00:22:50.677 ----------------------
00:22:50.677 Transport Type: 3 (TCP)
00:22:50.677 Address Family: 1 (IPv4)
00:22:50.677 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:50.677 Entry Flags:
00:22:50.677 Duplicate Returned Information: 1
00:22:50.677 Explicit Persistent Connection Support for Discovery: 1
00:22:50.677 Transport Requirements:
00:22:50.677 Secure Channel: Not Required
00:22:50.677 Port ID: 0 (0x0000)
00:22:50.677 Controller ID: 65535 (0xffff)
00:22:50.677 Admin Max SQ Size: 128
00:22:50.677 Transport Service Identifier: 4420
00:22:50.677 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:50.677 Transport Address: 10.0.0.2
00:22:50.677 Discovery Log Entry 1
00:22:50.677 ----------------------
00:22:50.677 Transport Type: 3 (TCP)
00:22:50.677 Address Family: 1 (IPv4)
00:22:50.677 Subsystem Type: 2 (NVM Subsystem)
00:22:50.677 Entry Flags:
00:22:50.677 Duplicate Returned Information: 0
00:22:50.677 Explicit Persistent Connection Support for Discovery: 0
00:22:50.677 Transport Requirements:
00:22:50.677 Secure Channel: Not Required
00:22:50.677 Port ID: 0 (0x0000)
00:22:50.677 Controller ID: 65535 (0xffff)
00:22:50.677 Admin Max SQ Size: 128
00:22:50.677 Transport Service Identifier: 4420
00:22:50.677 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:50.677 Transport Address: 10.0.0.2
00:22:50.677 [2024-12-09 15:55:45.715331] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:50.677 [2024-12-09 15:55:45.715341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f100) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.677 [2024-12-09 15:55:45.715352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f280) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.677 [2024-12-09 15:55:45.715360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f400) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.677 [2024-12-09 15:55:45.715368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.677 [2024-12-09 15:55:45.715381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690)
00:22:50.677 [2024-12-09 15:55:45.715396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.677 [2024-12-09 15:55:45.715409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0
00:22:50.677 [2024-12-09 15:55:45.715469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690)
00:22:50.677 [2024-12-09 15:55:45.715499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.677 [2024-12-09 15:55:45.715511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0
00:22:50.677 [2024-12-09 15:55:45.715582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715598] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:22:50.677 [2024-12-09 15:55:45.715602] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:22:50.677 [2024-12-09 15:55:45.715610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690)
00:22:50.677 [2024-12-09 15:55:45.715623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.677 [2024-12-09 15:55:45.715633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0
00:22:50.677 [2024-12-09 15:55:45.715696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690
00:22:50.677 [2024-12-09 15:55:45.715716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690)
00:22:50.677 [2024-12-09 15:55:45.715728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:50.677 [2024-12-09 15:55:45.715737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0
00:22:50.677 [2024-12-09 15:55:45.715796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type = 5
00:22:50.677 [2024-12-09 15:55:45.715805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:50.677 [2024-12-09 15:55:45.715808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.679 [2024-12-09 15:55:45.718202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.679 [2024-12-09 15:55:45.718211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.679 [2024-12-09 15:55:45.718286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.679 [2024-12-09 15:55:45.718291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.679 [2024-12-09 15:55:45.718294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.679 [2024-12-09 15:55:45.718297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.679 [2024-12-09 15:55:45.718306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.679 [2024-12-09 15:55:45.718309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.679 [2024-12-09 15:55:45.718312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.679 [2024-12-09 15:55:45.718317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.679 [2024-12-09 15:55:45.718327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.718390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718410] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.718498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.718598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718606] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.718699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 
15:55:45.718799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.718904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.718910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.718913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.718924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.718930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.718935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.718944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.719007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.719012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.719015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.719026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.719040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.719049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.719106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.719111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.719114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.719125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 
[2024-12-09 15:55:45.719132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.719137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.719146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.719203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.719209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.719211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.719215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 00:22:50.680 [2024-12-09 15:55:45.723232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.723237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.723240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbad690) 00:22:50.680 [2024-12-09 15:55:45.723246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.680 [2024-12-09 15:55:45.723256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc0f580, cid 3, qid 0 00:22:50.680 [2024-12-09 15:55:45.723320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.680 [2024-12-09 15:55:45.723326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.680 [2024-12-09 15:55:45.723329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.723332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc0f580) on tqpair=0xbad690 
00:22:50.680 [2024-12-09 15:55:45.723338] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:50.680 00:22:50.680 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:50.680 [2024-12-09 15:55:45.761348] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:22:50.680 [2024-12-09 15:55:45.761395] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079832 ] 00:22:50.680 [2024-12-09 15:55:45.801370] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:50.680 [2024-12-09 15:55:45.801408] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:50.680 [2024-12-09 15:55:45.801415] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:50.680 [2024-12-09 15:55:45.801427] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:50.680 [2024-12-09 15:55:45.801435] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:50.680 [2024-12-09 15:55:45.805363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:50.680 [2024-12-09 15:55:45.805388] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23a5690 0 00:22:50.680 [2024-12-09 15:55:45.813229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:50.680 [2024-12-09 
15:55:45.813240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:50.680 [2024-12-09 15:55:45.813244] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:50.680 [2024-12-09 15:55:45.813247] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:50.680 [2024-12-09 15:55:45.813273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.813278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.680 [2024-12-09 15:55:45.813282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.680 [2024-12-09 15:55:45.813291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:50.680 [2024-12-09 15:55:45.813307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.821228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.821235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.821238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.821250] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:50.681 [2024-12-09 15:55:45.821255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:50.681 [2024-12-09 15:55:45.821260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:50.681 [2024-12-09 15:55:45.821270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 
15:55:45.821274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.821283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.821296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.821453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.821458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.821461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.821469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:50.681 [2024-12-09 15:55:45.821475] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:50.681 [2024-12-09 15:55:45.821481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.821493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.821505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 
[2024-12-09 15:55:45.821567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.821572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.821575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.821582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:50.681 [2024-12-09 15:55:45.821589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.821595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.821607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.821616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.821684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.821689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.821692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.821700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.821708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.821720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.821729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.821791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.821797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.821800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.821807] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:50.681 [2024-12-09 15:55:45.821811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.821818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.821925] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:50.681 [2024-12-09 15:55:45.821930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.821936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.821944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.821949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.821959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.822018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.822024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.822026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.822033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:50.681 [2024-12-09 15:55:45.822041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.822053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.822063] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.822120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.681 [2024-12-09 15:55:45.822126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.681 [2024-12-09 15:55:45.822129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.681 [2024-12-09 15:55:45.822136] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:50.681 [2024-12-09 15:55:45.822140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:50.681 [2024-12-09 15:55:45.822146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:50.681 [2024-12-09 15:55:45.822154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:50.681 [2024-12-09 15:55:45.822162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.681 [2024-12-09 15:55:45.822171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.681 [2024-12-09 15:55:45.822180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.681 [2024-12-09 15:55:45.822306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.681 [2024-12-09 
15:55:45.822312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.681 [2024-12-09 15:55:45.822315] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.681 [2024-12-09 15:55:45.822318] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=4096, cccid=0 00:22:50.681 [2024-12-09 15:55:45.822322] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407100) on tqpair(0x23a5690): expected_datao=0, payload_size=4096 00:22:50.682 [2024-12-09 15:55:45.822325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.822335] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.822339] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.863379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.863382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.682 [2024-12-09 15:55:45.863393] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:50.682 [2024-12-09 15:55:45.863400] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:50.682 [2024-12-09 15:55:45.863404] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:50.682 [2024-12-09 15:55:45.863407] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:50.682 [2024-12-09 15:55:45.863411] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:50.682 [2024-12-09 15:55:45.863415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.682 [2024-12-09 15:55:45.863457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.682 [2024-12-09 15:55:45.863533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.863539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.863542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.682 [2024-12-09 15:55:45.863551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.682 [2024-12-09 15:55:45.863567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.682 [2024-12-09 15:55:45.863583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.682 [2024-12-09 15:55:45.863599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.682 [2024-12-09 15:55:45.863616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set 
keep alive timeout (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.682 [2024-12-09 15:55:45.863652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407100, cid 0, qid 0 00:22:50.682 [2024-12-09 15:55:45.863656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407280, cid 1, qid 0 00:22:50.682 [2024-12-09 15:55:45.863660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407400, cid 2, qid 0 00:22:50.682 [2024-12-09 15:55:45.863664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.682 [2024-12-09 15:55:45.863668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.682 [2024-12-09 15:55:45.863761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.863766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.863769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.682 [2024-12-09 15:55:45.863777] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:50.682 [2024-12-09 15:55:45.863781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:50.682 [2024-12-09 
15:55:45.863788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.682 [2024-12-09 15:55:45.863820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.682 [2024-12-09 15:55:45.863885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.863890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.863893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.682 [2024-12-09 15:55:45.863948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863958] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.863966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.863969] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.863975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.682 [2024-12-09 15:55:45.863984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.682 [2024-12-09 15:55:45.864059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.682 [2024-12-09 15:55:45.864065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.682 [2024-12-09 15:55:45.864068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864071] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=4096, cccid=4 00:22:50.682 [2024-12-09 15:55:45.864075] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407700) on tqpair(0x23a5690): expected_datao=0, payload_size=4096 00:22:50.682 [2024-12-09 15:55:45.864078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.864101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.864104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.682 [2024-12-09 15:55:45.864115] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 
was added 00:22:50.682 [2024-12-09 15:55:45.864124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.864131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:50.682 [2024-12-09 15:55:45.864137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.682 [2024-12-09 15:55:45.864146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.682 [2024-12-09 15:55:45.864157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.682 [2024-12-09 15:55:45.864243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.682 [2024-12-09 15:55:45.864250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.682 [2024-12-09 15:55:45.864253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=4096, cccid=4 00:22:50.682 [2024-12-09 15:55:45.864259] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407700) on tqpair(0x23a5690): expected_datao=0, payload_size=4096 00:22:50.682 [2024-12-09 15:55:45.864263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864272] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:22:50.682 [2024-12-09 15:55:45.864287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.682 [2024-12-09 15:55:45.864290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.682 [2024-12-09 15:55:45.864293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.864304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.864329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.864339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.683 [2024-12-09 15:55:45.864412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.683 [2024-12-09 15:55:45.864418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.683 [2024-12-09 15:55:45.864421] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864424] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=4096, cccid=4 00:22:50.683 [2024-12-09 15:55:45.864428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407700) on tqpair(0x23a5690): expected_datao=0, 
payload_size=4096 00:22:50.683 [2024-12-09 15:55:45.864432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864437] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864440] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.864459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.864462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.864471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864505] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:50.683 [2024-12-09 15:55:45.864509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:50.683 [2024-12-09 15:55:45.864514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:50.683 [2024-12-09 15:55:45.864525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.864534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.864540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.864555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.683 [2024-12-09 15:55:45.864567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.683 [2024-12-09 15:55:45.864572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407880, cid 5, qid 0 00:22:50.683 [2024-12-09 15:55:45.864654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.864660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.864663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 
15:55:45.864666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.864671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.864676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.864679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407880) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.864690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.864693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.864698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.864708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407880, cid 5, qid 0 00:22:50.683 [2024-12-09 15:55:45.868223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.868230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.868233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407880) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.868247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 
cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407880, cid 5, qid 0 00:22:50.683 [2024-12-09 15:55:45.868416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.868422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.868425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407880) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.868435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407880, cid 5, qid 0 00:22:50.683 [2024-12-09 15:55:45.868556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.683 [2024-12-09 15:55:45.868561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.683 [2024-12-09 15:55:45.868564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407880) on tqpair=0x23a5690 00:22:50.683 [2024-12-09 15:55:45.868584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868594] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23a5690) 00:22:50.683 [2024-12-09 15:55:45.868637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.683 [2024-12-09 15:55:45.868647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407880, cid 5, qid 0 00:22:50.683 [2024-12-09 15:55:45.868652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407700, cid 4, qid 0 00:22:50.683 [2024-12-09 15:55:45.868656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407a00, cid 6, qid 0 00:22:50.683 [2024-12-09 15:55:45.868660] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407b80, cid 7, qid 0 00:22:50.683 [2024-12-09 15:55:45.868791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.683 [2024-12-09 15:55:45.868797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.683 [2024-12-09 15:55:45.868800] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868802] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=8192, cccid=5 00:22:50.683 [2024-12-09 15:55:45.868806] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407880) on tqpair(0x23a5690): expected_datao=0, payload_size=8192 00:22:50.683 [2024-12-09 15:55:45.868810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868834] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868838] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.683 [2024-12-09 15:55:45.868851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.683 [2024-12-09 15:55:45.868854] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868857] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=512, cccid=4 00:22:50.683 [2024-12-09 15:55:45.868860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407700) on tqpair(0x23a5690): expected_datao=0, payload_size=512 00:22:50.683 [2024-12-09 15:55:45.868864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868872] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:22:50.683 [2024-12-09 15:55:45.868877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.683 [2024-12-09 15:55:45.868883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.683 [2024-12-09 15:55:45.868886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868889] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=512, cccid=6 00:22:50.684 [2024-12-09 15:55:45.868893] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407a00) on tqpair(0x23a5690): expected_datao=0, payload_size=512 00:22:50.684 [2024-12-09 15:55:45.868896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868901] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.684 [2024-12-09 15:55:45.868914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.684 [2024-12-09 15:55:45.868916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868919] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a5690): datao=0, datal=4096, cccid=7 00:22:50.684 [2024-12-09 15:55:45.868923] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2407b80) on tqpair(0x23a5690): expected_datao=0, payload_size=4096 00:22:50.684 [2024-12-09 15:55:45.868927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868935] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868953] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.684 [2024-12-09 15:55:45.868958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.684 [2024-12-09 15:55:45.868961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407880) on tqpair=0x23a5690 00:22:50.684 [2024-12-09 15:55:45.868974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.684 [2024-12-09 15:55:45.868979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.684 [2024-12-09 15:55:45.868982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.868985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407700) on tqpair=0x23a5690 00:22:50.684 [2024-12-09 15:55:45.868993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.684 [2024-12-09 15:55:45.868998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.684 [2024-12-09 15:55:45.869001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.869004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407a00) on tqpair=0x23a5690 00:22:50.684 [2024-12-09 15:55:45.869010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.684 [2024-12-09 15:55:45.869014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.684 [2024-12-09 15:55:45.869017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.684 [2024-12-09 15:55:45.869020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407b80) on tqpair=0x23a5690 00:22:50.684 ===================================================== 00:22:50.684 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.684 
===================================================== 00:22:50.684 Controller Capabilities/Features 00:22:50.684 ================================ 00:22:50.684 Vendor ID: 8086 00:22:50.684 Subsystem Vendor ID: 8086 00:22:50.684 Serial Number: SPDK00000000000001 00:22:50.684 Model Number: SPDK bdev Controller 00:22:50.684 Firmware Version: 25.01 00:22:50.684 Recommended Arb Burst: 6 00:22:50.684 IEEE OUI Identifier: e4 d2 5c 00:22:50.684 Multi-path I/O 00:22:50.684 May have multiple subsystem ports: Yes 00:22:50.684 May have multiple controllers: Yes 00:22:50.684 Associated with SR-IOV VF: No 00:22:50.684 Max Data Transfer Size: 131072 00:22:50.684 Max Number of Namespaces: 32 00:22:50.684 Max Number of I/O Queues: 127 00:22:50.684 NVMe Specification Version (VS): 1.3 00:22:50.684 NVMe Specification Version (Identify): 1.3 00:22:50.684 Maximum Queue Entries: 128 00:22:50.684 Contiguous Queues Required: Yes 00:22:50.684 Arbitration Mechanisms Supported 00:22:50.684 Weighted Round Robin: Not Supported 00:22:50.684 Vendor Specific: Not Supported 00:22:50.684 Reset Timeout: 15000 ms 00:22:50.684 Doorbell Stride: 4 bytes 00:22:50.684 NVM Subsystem Reset: Not Supported 00:22:50.684 Command Sets Supported 00:22:50.684 NVM Command Set: Supported 00:22:50.684 Boot Partition: Not Supported 00:22:50.684 Memory Page Size Minimum: 4096 bytes 00:22:50.684 Memory Page Size Maximum: 4096 bytes 00:22:50.684 Persistent Memory Region: Not Supported 00:22:50.684 Optional Asynchronous Events Supported 00:22:50.684 Namespace Attribute Notices: Supported 00:22:50.684 Firmware Activation Notices: Not Supported 00:22:50.684 ANA Change Notices: Not Supported 00:22:50.684 PLE Aggregate Log Change Notices: Not Supported 00:22:50.684 LBA Status Info Alert Notices: Not Supported 00:22:50.684 EGE Aggregate Log Change Notices: Not Supported 00:22:50.684 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.684 Zone Descriptor Change Notices: Not Supported 00:22:50.684 Discovery Log Change 
Notices: Not Supported 00:22:50.684 Controller Attributes 00:22:50.684 128-bit Host Identifier: Supported 00:22:50.684 Non-Operational Permissive Mode: Not Supported 00:22:50.684 NVM Sets: Not Supported 00:22:50.684 Read Recovery Levels: Not Supported 00:22:50.684 Endurance Groups: Not Supported 00:22:50.684 Predictable Latency Mode: Not Supported 00:22:50.684 Traffic Based Keep ALive: Not Supported 00:22:50.684 Namespace Granularity: Not Supported 00:22:50.684 SQ Associations: Not Supported 00:22:50.684 UUID List: Not Supported 00:22:50.684 Multi-Domain Subsystem: Not Supported 00:22:50.684 Fixed Capacity Management: Not Supported 00:22:50.684 Variable Capacity Management: Not Supported 00:22:50.684 Delete Endurance Group: Not Supported 00:22:50.684 Delete NVM Set: Not Supported 00:22:50.684 Extended LBA Formats Supported: Not Supported 00:22:50.684 Flexible Data Placement Supported: Not Supported 00:22:50.684 00:22:50.684 Controller Memory Buffer Support 00:22:50.684 ================================ 00:22:50.684 Supported: No 00:22:50.684 00:22:50.684 Persistent Memory Region Support 00:22:50.684 ================================ 00:22:50.684 Supported: No 00:22:50.684 00:22:50.684 Admin Command Set Attributes 00:22:50.684 ============================ 00:22:50.684 Security Send/Receive: Not Supported 00:22:50.684 Format NVM: Not Supported 00:22:50.684 Firmware Activate/Download: Not Supported 00:22:50.684 Namespace Management: Not Supported 00:22:50.684 Device Self-Test: Not Supported 00:22:50.684 Directives: Not Supported 00:22:50.684 NVMe-MI: Not Supported 00:22:50.684 Virtualization Management: Not Supported 00:22:50.684 Doorbell Buffer Config: Not Supported 00:22:50.684 Get LBA Status Capability: Not Supported 00:22:50.684 Command & Feature Lockdown Capability: Not Supported 00:22:50.684 Abort Command Limit: 4 00:22:50.684 Async Event Request Limit: 4 00:22:50.684 Number of Firmware Slots: N/A 00:22:50.684 Firmware Slot 1 Read-Only: N/A 00:22:50.684 Firmware 
Activation Without Reset: N/A 00:22:50.684 Multiple Update Detection Support: N/A 00:22:50.684 Firmware Update Granularity: No Information Provided 00:22:50.684 Per-Namespace SMART Log: No 00:22:50.684 Asymmetric Namespace Access Log Page: Not Supported 00:22:50.684 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:50.684 Command Effects Log Page: Supported 00:22:50.684 Get Log Page Extended Data: Supported 00:22:50.684 Telemetry Log Pages: Not Supported 00:22:50.684 Persistent Event Log Pages: Not Supported 00:22:50.684 Supported Log Pages Log Page: May Support 00:22:50.684 Commands Supported & Effects Log Page: Not Supported 00:22:50.684 Feature Identifiers & Effects Log Page:May Support 00:22:50.684 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.684 Data Area 4 for Telemetry Log: Not Supported 00:22:50.684 Error Log Page Entries Supported: 128 00:22:50.684 Keep Alive: Supported 00:22:50.684 Keep Alive Granularity: 10000 ms 00:22:50.684 00:22:50.684 NVM Command Set Attributes 00:22:50.684 ========================== 00:22:50.684 Submission Queue Entry Size 00:22:50.684 Max: 64 00:22:50.684 Min: 64 00:22:50.684 Completion Queue Entry Size 00:22:50.684 Max: 16 00:22:50.684 Min: 16 00:22:50.684 Number of Namespaces: 32 00:22:50.684 Compare Command: Supported 00:22:50.684 Write Uncorrectable Command: Not Supported 00:22:50.684 Dataset Management Command: Supported 00:22:50.684 Write Zeroes Command: Supported 00:22:50.684 Set Features Save Field: Not Supported 00:22:50.684 Reservations: Supported 00:22:50.684 Timestamp: Not Supported 00:22:50.684 Copy: Supported 00:22:50.684 Volatile Write Cache: Present 00:22:50.684 Atomic Write Unit (Normal): 1 00:22:50.684 Atomic Write Unit (PFail): 1 00:22:50.684 Atomic Compare & Write Unit: 1 00:22:50.684 Fused Compare & Write: Supported 00:22:50.684 Scatter-Gather List 00:22:50.684 SGL Command Set: Supported 00:22:50.684 SGL Keyed: Supported 00:22:50.684 SGL Bit Bucket Descriptor: Not Supported 00:22:50.684 SGL Metadata 
Pointer: Not Supported 00:22:50.684 Oversized SGL: Not Supported 00:22:50.684 SGL Metadata Address: Not Supported 00:22:50.684 SGL Offset: Supported 00:22:50.684 Transport SGL Data Block: Not Supported 00:22:50.684 Replay Protected Memory Block: Not Supported 00:22:50.684 00:22:50.684 Firmware Slot Information 00:22:50.684 ========================= 00:22:50.684 Active slot: 1 00:22:50.684 Slot 1 Firmware Revision: 25.01 00:22:50.684 00:22:50.684 00:22:50.684 Commands Supported and Effects 00:22:50.684 ============================== 00:22:50.684 Admin Commands 00:22:50.684 -------------- 00:22:50.685 Get Log Page (02h): Supported 00:22:50.685 Identify (06h): Supported 00:22:50.685 Abort (08h): Supported 00:22:50.685 Set Features (09h): Supported 00:22:50.685 Get Features (0Ah): Supported 00:22:50.685 Asynchronous Event Request (0Ch): Supported 00:22:50.685 Keep Alive (18h): Supported 00:22:50.685 I/O Commands 00:22:50.685 ------------ 00:22:50.685 Flush (00h): Supported LBA-Change 00:22:50.685 Write (01h): Supported LBA-Change 00:22:50.685 Read (02h): Supported 00:22:50.685 Compare (05h): Supported 00:22:50.685 Write Zeroes (08h): Supported LBA-Change 00:22:50.685 Dataset Management (09h): Supported LBA-Change 00:22:50.685 Copy (19h): Supported LBA-Change 00:22:50.685 00:22:50.685 Error Log 00:22:50.685 ========= 00:22:50.685 00:22:50.685 Arbitration 00:22:50.685 =========== 00:22:50.685 Arbitration Burst: 1 00:22:50.685 00:22:50.685 Power Management 00:22:50.685 ================ 00:22:50.685 Number of Power States: 1 00:22:50.685 Current Power State: Power State #0 00:22:50.685 Power State #0: 00:22:50.685 Max Power: 0.00 W 00:22:50.685 Non-Operational State: Operational 00:22:50.685 Entry Latency: Not Reported 00:22:50.685 Exit Latency: Not Reported 00:22:50.685 Relative Read Throughput: 0 00:22:50.685 Relative Read Latency: 0 00:22:50.685 Relative Write Throughput: 0 00:22:50.685 Relative Write Latency: 0 00:22:50.685 Idle Power: Not Reported 00:22:50.685 Active 
Power: Not Reported 00:22:50.685 Non-Operational Permissive Mode: Not Supported 00:22:50.685 00:22:50.685 Health Information 00:22:50.685 ================== 00:22:50.685 Critical Warnings: 00:22:50.685 Available Spare Space: OK 00:22:50.685 Temperature: OK 00:22:50.685 Device Reliability: OK 00:22:50.685 Read Only: No 00:22:50.685 Volatile Memory Backup: OK 00:22:50.685 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:50.685 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:50.685 Available Spare: 0% 00:22:50.685 Available Spare Threshold: 0% 00:22:50.685 Life Percentage Used:[2024-12-09 15:55:45.869097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407b80, cid 7, qid 0 00:22:50.685 [2024-12-09 15:55:45.869245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407b80) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869289] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:50.685 [2024-12-09 15:55:45.869298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407100) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.685 [2024-12-09 15:55:45.869307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407280) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.685 [2024-12-09 15:55:45.869315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407400) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.685 [2024-12-09 15:55:45.869323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.685 [2024-12-09 15:55:45.869334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.869437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 
15:55:45.869449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.869584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869600] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:50.685 [2024-12-09 15:55:45.869604] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:50.685 [2024-12-09 15:55:45.869611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.869695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.869837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 
[2024-12-09 15:55:45.869862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.869938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.869944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.869947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869950] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.869957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.869964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.685 [2024-12-09 15:55:45.869969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.685 [2024-12-09 15:55:45.869978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.685 [2024-12-09 15:55:45.870090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.685 [2024-12-09 15:55:45.870095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.685 [2024-12-09 15:55:45.870098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.870101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on 
tqpair=0x23a5690 00:22:50.685 [2024-12-09 15:55:45.870109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.685 [2024-12-09 15:55:45.870113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.870202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:50.686 [2024-12-09 15:55:45.870350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.870501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.870652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.870761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870783] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.870901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.870904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.870915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.870921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.870927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.870936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.870997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.871003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.871017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871020] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.871028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.871037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.871148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.871154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.871167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.871179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.871188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.871257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.871263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871271] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.871278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.871290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.871300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.871401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.871407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.871423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.871436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.871447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.871558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 
15:55:45.871564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.686 [2024-12-09 15:55:45.871582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.686 [2024-12-09 15:55:45.871588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.686 [2024-12-09 15:55:45.871594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.686 [2024-12-09 15:55:45.871603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.686 [2024-12-09 15:55:45.871703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.686 [2024-12-09 15:55:45.871709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.686 [2024-12-09 15:55:45.871711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.871722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.871734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 
15:55:45.871743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.871803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.871808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.871811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.871825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.871836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 15:55:45.871845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.871905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.871910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.871913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.871924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.871930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.871936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 15:55:45.871945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.872004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.872010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.872013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.872023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.872035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 15:55:45.872044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.872107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.872112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.872115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.872126] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.872132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.872137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 15:55:45.872147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.872211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.876222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.876228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.876231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.876244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.876247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.876250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a5690) 00:22:50.687 [2024-12-09 15:55:45.876256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.687 [2024-12-09 15:55:45.876267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2407580, cid 3, qid 0 00:22:50.687 [2024-12-09 15:55:45.876417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.687 [2024-12-09 15:55:45.876423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.687 [2024-12-09 15:55:45.876426] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.687 [2024-12-09 15:55:45.876429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2407580) on tqpair=0x23a5690 00:22:50.687 [2024-12-09 15:55:45.876435] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:50.687 0% 00:22:50.687 Data Units Read: 0 00:22:50.687 Data Units Written: 0 00:22:50.687 Host Read Commands: 0 00:22:50.687 Host Write Commands: 0 00:22:50.687 Controller Busy Time: 0 minutes 00:22:50.687 Power Cycles: 0 00:22:50.687 Power On Hours: 0 hours 00:22:50.687 Unsafe Shutdowns: 0 00:22:50.687 Unrecoverable Media Errors: 0 00:22:50.687 Lifetime Error Log Entries: 0 00:22:50.687 Warning Temperature Time: 0 minutes 00:22:50.687 Critical Temperature Time: 0 minutes 00:22:50.687 00:22:50.687 Number of Queues 00:22:50.687 ================ 00:22:50.687 Number of I/O Submission Queues: 127 00:22:50.687 Number of I/O Completion Queues: 127 00:22:50.687 00:22:50.687 Active Namespaces 00:22:50.687 ================= 00:22:50.687 Namespace ID:1 00:22:50.687 Error Recovery Timeout: Unlimited 00:22:50.687 Command Set Identifier: NVM (00h) 00:22:50.687 Deallocate: Supported 00:22:50.687 Deallocated/Unwritten Error: Not Supported 00:22:50.687 Deallocated Read Value: Unknown 00:22:50.687 Deallocate in Write Zeroes: Not Supported 00:22:50.687 Deallocated Guard Field: 0xFFFF 00:22:50.687 Flush: Supported 00:22:50.687 Reservation: Supported 00:22:50.687 Namespace Sharing Capabilities: Multiple Controllers 00:22:50.687 Size (in LBAs): 131072 (0GiB) 00:22:50.687 Capacity (in LBAs): 131072 (0GiB) 00:22:50.687 Utilization (in LBAs): 131072 (0GiB) 00:22:50.687 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:50.687 EUI64: ABCDEF0123456789 00:22:50.687 UUID: bea4d07a-5710-42c0-93db-4eb6c03b9b3a 00:22:50.687 Thin Provisioning: Not Supported 00:22:50.687 Per-NS Atomic Units: Yes 00:22:50.687 Atomic Boundary Size 
(Normal): 0 00:22:50.687 Atomic Boundary Size (PFail): 0 00:22:50.687 Atomic Boundary Offset: 0 00:22:50.687 Maximum Single Source Range Length: 65535 00:22:50.687 Maximum Copy Length: 65535 00:22:50.687 Maximum Source Range Count: 1 00:22:50.687 NGUID/EUI64 Never Reused: No 00:22:50.687 Namespace Write Protected: No 00:22:50.687 Number of LBA Formats: 1 00:22:50.687 Current LBA Format: LBA Format #00 00:22:50.687 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:50.687 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.947 rmmod nvme_tcp 00:22:50.947 rmmod nvme_fabrics 00:22:50.947 rmmod nvme_keyring 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2079718 ']' 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2079718 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2079718 ']' 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2079718 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.947 15:55:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079718 00:22:50.947 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.947 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.947 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079718' 00:22:50.947 killing process with pid 2079718 00:22:50.947 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2079718 00:22:50.947 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2079718 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:51.206 15:55:46 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.206 15:55:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.110 15:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.110 00:22:53.110 real 0m9.334s 00:22:53.110 user 0m5.349s 00:22:53.110 sys 0m4.783s 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.111 ************************************ 00:22:53.111 END TEST nvmf_identify 00:22:53.111 ************************************ 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.111 15:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.371 ************************************ 00:22:53.371 START TEST nvmf_perf 00:22:53.371 ************************************ 
00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.371 * Looking for test storage... 00:22:53.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:53.371 15:55:48 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:53.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.371 --rc genhtml_branch_coverage=1 00:22:53.371 --rc genhtml_function_coverage=1 00:22:53.371 --rc genhtml_legend=1 00:22:53.371 --rc geninfo_all_blocks=1 00:22:53.371 --rc geninfo_unexecuted_blocks=1 00:22:53.371 00:22:53.371 
' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:53.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.371 --rc genhtml_branch_coverage=1 00:22:53.371 --rc genhtml_function_coverage=1 00:22:53.371 --rc genhtml_legend=1 00:22:53.371 --rc geninfo_all_blocks=1 00:22:53.371 --rc geninfo_unexecuted_blocks=1 00:22:53.371 00:22:53.371 ' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:53.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.371 --rc genhtml_branch_coverage=1 00:22:53.371 --rc genhtml_function_coverage=1 00:22:53.371 --rc genhtml_legend=1 00:22:53.371 --rc geninfo_all_blocks=1 00:22:53.371 --rc geninfo_unexecuted_blocks=1 00:22:53.371 00:22:53.371 ' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:53.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.371 --rc genhtml_branch_coverage=1 00:22:53.371 --rc genhtml_function_coverage=1 00:22:53.371 --rc genhtml_legend=1 00:22:53.371 --rc geninfo_all_blocks=1 00:22:53.371 --rc geninfo_unexecuted_blocks=1 00:22:53.371 00:22:53.371 ' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.371 15:55:48 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:53.371 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:53.371 15:55:48 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.372 15:55:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.946 15:55:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.946 
15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:59.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:59.946 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:59.946 Found net devices under 0000:af:00.0: cvl_0_0 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.946 15:55:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:59.946 Found net devices under 0000:af:00.1: cvl_0_1 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.946 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:22:59.947 00:22:59.947 --- 10.0.0.2 ping statistics --- 00:22:59.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.947 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:59.947 00:22:59.947 --- 10.0.0.1 ping statistics --- 00:22:59.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.947 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2083323 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2083323 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2083323 ']' 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 [2024-12-09 15:55:54.528537] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:22:59.947 [2024-12-09 15:55:54.528579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.947 [2024-12-09 15:55:54.606739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.947 [2024-12-09 15:55:54.647223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.947 [2024-12-09 15:55:54.647260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.947 [2024-12-09 15:55:54.647268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.947 [2024-12-09 15:55:54.647274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.947 [2024-12-09 15:55:54.647280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.947 [2024-12-09 15:55:54.648803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.947 [2024-12-09 15:55:54.648932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.947 [2024-12-09 15:55:54.649051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.947 [2024-12-09 15:55:54.649052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:59.947 15:55:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:03.234 15:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:03.234 15:55:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:03.234 15:55:58 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.234 [2024-12-09 15:55:58.426527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.234 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.493 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.493 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.752 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.752 15:55:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:04.011 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.011 [2024-12-09 15:55:59.218596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.270 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:04.270 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:04.270 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:04.270 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:04.270 15:55:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:05.647 Initializing NVMe Controllers 00:23:05.647 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:05.647 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:05.647 Initialization complete. Launching workers. 00:23:05.647 ======================================================== 00:23:05.647 Latency(us) 00:23:05.647 Device Information : IOPS MiB/s Average min max 00:23:05.647 PCIE (0000:5e:00.0) NSID 1 from core 0: 99395.20 388.26 321.33 29.04 4566.14 00:23:05.647 ======================================================== 00:23:05.647 Total : 99395.20 388.26 321.33 29.04 4566.14 00:23:05.647 00:23:05.647 15:56:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.024 Initializing NVMe Controllers 00:23:07.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.024 Initialization complete. Launching workers. 
00:23:07.024 ======================================================== 00:23:07.024 Latency(us) 00:23:07.024 Device Information : IOPS MiB/s Average min max 00:23:07.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 137.51 0.54 7512.48 105.40 44766.17 00:23:07.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.81 0.21 18812.12 7956.47 47886.79 00:23:07.024 ======================================================== 00:23:07.024 Total : 192.32 0.75 10732.58 105.40 47886.79 00:23:07.024 00:23:07.024 15:56:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.400 Initializing NVMe Controllers 00:23:08.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.400 Initialization complete. Launching workers. 
00:23:08.400 ======================================================== 00:23:08.401 Latency(us) 00:23:08.401 Device Information : IOPS MiB/s Average min max 00:23:08.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11204.99 43.77 2864.15 385.77 6242.15 00:23:08.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.00 14.91 8417.54 6257.32 16030.20 00:23:08.401 ======================================================== 00:23:08.401 Total : 15021.98 58.68 4275.23 385.77 16030.20 00:23:08.401 00:23:08.401 15:56:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:08.401 15:56:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:08.401 15:56:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:10.934 Initializing NVMe Controllers 00:23:10.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.934 Controller IO queue size 128, less than required. 00:23:10.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.934 Controller IO queue size 128, less than required. 00:23:10.934 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.934 Initialization complete. Launching workers. 
00:23:10.934 ======================================================== 00:23:10.934 Latency(us) 00:23:10.934 Device Information : IOPS MiB/s Average min max 00:23:10.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1806.46 451.61 72041.41 45158.21 134636.51 00:23:10.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.49 150.37 217395.04 65472.91 331287.98 00:23:10.934 ======================================================== 00:23:10.934 Total : 2407.95 601.99 108349.64 45158.21 331287.98 00:23:10.934 00:23:10.934 15:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:11.193 No valid NVMe controllers or AIO or URING devices found 00:23:11.193 Initializing NVMe Controllers 00:23:11.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.193 Controller IO queue size 128, less than required. 00:23:11.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.193 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:11.193 Controller IO queue size 128, less than required. 00:23:11.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.193 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:11.193 WARNING: Some requested NVMe devices were skipped 00:23:11.193 15:56:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:13.727 Initializing NVMe Controllers 00:23:13.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.727 Controller IO queue size 128, less than required. 00:23:13.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.727 Controller IO queue size 128, less than required. 00:23:13.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:13.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:13.727 Initialization complete. Launching workers. 
00:23:13.727 00:23:13.727 ==================== 00:23:13.727 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:13.727 TCP transport: 00:23:13.727 polls: 10943 00:23:13.727 idle_polls: 7569 00:23:13.727 sock_completions: 3374 00:23:13.727 nvme_completions: 6455 00:23:13.727 submitted_requests: 9676 00:23:13.727 queued_requests: 1 00:23:13.727 00:23:13.727 ==================== 00:23:13.727 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:13.727 TCP transport: 00:23:13.727 polls: 15003 00:23:13.727 idle_polls: 11425 00:23:13.727 sock_completions: 3578 00:23:13.727 nvme_completions: 6785 00:23:13.727 submitted_requests: 10154 00:23:13.727 queued_requests: 1 00:23:13.727 ======================================================== 00:23:13.727 Latency(us) 00:23:13.727 Device Information : IOPS MiB/s Average min max 00:23:13.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1610.16 402.54 81057.80 56438.80 138251.29 00:23:13.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1692.49 423.12 76426.13 47381.49 125286.03 00:23:13.727 ======================================================== 00:23:13.727 Total : 3302.65 825.66 78684.24 47381.49 138251.29 00:23:13.727 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.727 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.727 rmmod nvme_tcp 00:23:13.727 rmmod nvme_fabrics 00:23:13.727 rmmod nvme_keyring 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2083323 ']' 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2083323 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2083323 ']' 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2083323 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.986 15:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083323 00:23:13.986 15:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.986 15:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.986 15:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083323' 00:23:13.986 killing process with pid 2083323 00:23:13.986 15:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2083323 00:23:13.986 15:56:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2083323 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.363 15:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.897 00:23:17.897 real 0m24.257s 00:23:17.897 user 1m3.181s 00:23:17.897 sys 0m8.293s 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.897 ************************************ 00:23:17.897 END TEST nvmf_perf 00:23:17.897 ************************************ 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.897 ************************************ 00:23:17.897 START TEST nvmf_fio_host 00:23:17.897 ************************************ 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.897 * Looking for test storage... 00:23:17.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.897 15:56:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.897 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.898 15:56:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.898 --rc genhtml_branch_coverage=1 00:23:17.898 --rc genhtml_function_coverage=1 00:23:17.898 --rc genhtml_legend=1 00:23:17.898 --rc geninfo_all_blocks=1 00:23:17.898 --rc geninfo_unexecuted_blocks=1 00:23:17.898 00:23:17.898 ' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.898 --rc genhtml_branch_coverage=1 00:23:17.898 --rc genhtml_function_coverage=1 00:23:17.898 --rc genhtml_legend=1 00:23:17.898 --rc geninfo_all_blocks=1 00:23:17.898 --rc geninfo_unexecuted_blocks=1 00:23:17.898 00:23:17.898 ' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.898 --rc genhtml_branch_coverage=1 00:23:17.898 --rc genhtml_function_coverage=1 00:23:17.898 --rc genhtml_legend=1 00:23:17.898 --rc geninfo_all_blocks=1 00:23:17.898 --rc geninfo_unexecuted_blocks=1 00:23:17.898 00:23:17.898 ' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.898 --rc genhtml_branch_coverage=1 00:23:17.898 --rc genhtml_function_coverage=1 00:23:17.898 --rc genhtml_legend=1 00:23:17.898 --rc geninfo_all_blocks=1 00:23:17.898 --rc geninfo_unexecuted_blocks=1 00:23:17.898 00:23:17.898 ' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.898 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.899 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.899 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.899 15:56:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.899 15:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:23:24.469 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:24.469 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.469 15:56:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:24.469 Found net devices under 0000:af:00.0: cvl_0_0 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:24.469 Found net devices under 0000:af:00.1: cvl_0_1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.469 15:56:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:23:24.469 00:23:24.469 --- 10.0.0.2 ping statistics --- 00:23:24.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.469 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:24.469 00:23:24.469 --- 10.0.0.1 ping statistics --- 00:23:24.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.469 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2089365 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2089365 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2089365 ']' 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.469 15:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.469 [2024-12-09 15:56:18.881595] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:23:24.469 [2024-12-09 15:56:18.881643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.469 [2024-12-09 15:56:18.958190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.469 [2024-12-09 15:56:18.999243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.469 [2024-12-09 15:56:18.999279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.469 [2024-12-09 15:56:18.999287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.469 [2024-12-09 15:56:18.999293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.469 [2024-12-09 15:56:18.999298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.469 [2024-12-09 15:56:19.000856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.470 [2024-12-09 15:56:19.000963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.470 [2024-12-09 15:56:19.001071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.470 [2024-12-09 15:56:19.001072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:24.470 [2024-12-09 15:56:19.291419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:24.470 Malloc1 00:23:24.470 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.728 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:24.987 15:56:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.987 [2024-12-09 15:56:20.132744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.987 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:25.246 15:56:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:25.246 15:56:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.505 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:25.505 fio-3.35 00:23:25.505 Starting 1 thread 00:23:28.068 [2024-12-09 15:56:22.960319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf6b0 is same with the state(6) to be set 00:23:28.068 [2024-12-09 15:56:22.960375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf6b0 is same with the state(6) to be set 00:23:28.068 [2024-12-09 15:56:22.960384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf6b0 is same with the state(6) to be set 00:23:28.068 [2024-12-09 15:56:22.960391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf6b0 is same with the state(6) to be set 00:23:28.068 [2024-12-09 15:56:22.960397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf6b0 is same with the state(6) to be set 00:23:28.068 00:23:28.068 test: (groupid=0, jobs=1): err= 0: pid=2089948: Mon Dec 9 15:56:22 2024 00:23:28.068 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:23:28.068 slat (nsec): min=1539, max=259171, avg=1766.55, stdev=2360.70 00:23:28.068 clat (usec): min=3152, max=10576, avg=5950.77, stdev=464.68 00:23:28.068 lat (usec): min=3156, max=10578, avg=5952.54, stdev=464.62 00:23:28.068 clat percentiles (usec): 00:23:28.068 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:28.068 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:23:28.068 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 
6652], 00:23:28.068 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9372], 00:23:28.068 | 99.99th=[10159] 00:23:28.068 bw ( KiB/s): min=46752, max=48160, per=99.96%, avg=47538.00, stdev=609.63, samples=4 00:23:28.068 iops : min=11688, max=12040, avg=11884.50, stdev=152.41, samples=4 00:23:28.068 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec); 0 zone resets 00:23:28.068 slat (nsec): min=1580, max=236281, avg=1816.60, stdev=1715.49 00:23:28.068 clat (usec): min=2531, max=9291, avg=4802.45, stdev=388.49 00:23:28.068 lat (usec): min=2545, max=9293, avg=4804.26, stdev=388.53 00:23:28.068 clat percentiles (usec): 00:23:28.068 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:28.068 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:23:28.068 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:28.068 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7570], 99.95th=[ 8848], 00:23:28.068 | 99.99th=[ 9110] 00:23:28.068 bw ( KiB/s): min=46976, max=47808, per=100.00%, avg=47344.00, stdev=348.10, samples=4 00:23:28.068 iops : min=11744, max=11952, avg=11836.00, stdev=87.02, samples=4 00:23:28.068 lat (msec) : 4=0.82%, 10=99.17%, 20=0.01% 00:23:28.068 cpu : usr=73.05%, sys=25.70%, ctx=94, majf=0, minf=2 00:23:28.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:28.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:28.068 issued rwts: total=23839,23729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:28.068 00:23:28.068 Run status group 0 (all jobs): 00:23:28.068 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:23:28.068 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB 
(97.2MB), run=2005-2005msec 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.068 15:56:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.068 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:28.069 15:56:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.326 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:28.326 fio-3.35 00:23:28.326 Starting 1 thread 00:23:30.855 00:23:30.855 test: (groupid=0, jobs=1): err= 0: pid=2090498: Mon Dec 9 15:56:25 2024 00:23:30.855 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2006msec) 00:23:30.855 slat (nsec): min=2367, max=99158, avg=2822.35, stdev=1323.49 00:23:30.855 clat (usec): min=1960, max=14321, avg=6864.96, stdev=1626.87 00:23:30.855 lat (usec): min=1963, max=14336, avg=6867.78, stdev=1627.04 00:23:30.855 clat percentiles (usec): 00:23:30.855 | 1.00th=[ 3556], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:23:30.855 | 30.00th=[ 5932], 40.00th=[ 
6390], 50.00th=[ 6849], 60.00th=[ 7242], 00:23:30.855 | 70.00th=[ 7570], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9634], 00:23:30.855 | 99.00th=[11338], 99.50th=[11863], 99.90th=[12911], 99.95th=[13829], 00:23:30.855 | 99.99th=[14353] 00:23:30.855 bw ( KiB/s): min=83744, max=91456, per=50.20%, avg=86960.00, stdev=3269.90, samples=4 00:23:30.855 iops : min= 5234, max= 5716, avg=5435.00, stdev=204.37, samples=4 00:23:30.855 write: IOPS=6326, BW=98.9MiB/s (104MB/s)(177MiB/1789msec); 0 zone resets 00:23:30.855 slat (usec): min=28, max=379, avg=31.45, stdev= 7.86 00:23:30.855 clat (usec): min=2657, max=16328, avg=8797.62, stdev=1656.68 00:23:30.855 lat (usec): min=2693, max=16357, avg=8829.07, stdev=1658.19 00:23:30.855 clat percentiles (usec): 00:23:30.855 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 7439], 00:23:30.855 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:23:30.855 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[11731], 00:23:30.855 | 99.00th=[13173], 99.50th=[13960], 99.90th=[15795], 99.95th=[16057], 00:23:30.855 | 99.99th=[16319] 00:23:30.855 bw ( KiB/s): min=86912, max=93952, per=89.45%, avg=90544.00, stdev=2878.04, samples=4 00:23:30.855 iops : min= 5432, max= 5872, avg=5659.00, stdev=179.88, samples=4 00:23:30.855 lat (msec) : 2=0.01%, 4=1.67%, 10=88.39%, 20=9.93% 00:23:30.855 cpu : usr=86.64%, sys=12.61%, ctx=43, majf=0, minf=2 00:23:30.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:30.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.855 issued rwts: total=21718,11318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.855 00:23:30.855 Run status group 0 (all jobs): 00:23:30.855 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (356MB), 
run=2006-2006msec 00:23:30.855 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=177MiB (185MB), run=1789-1789msec 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.855 15:56:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.855 rmmod nvme_tcp 00:23:30.855 rmmod nvme_fabrics 00:23:30.855 rmmod nvme_keyring 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2089365 ']' 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2089365 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 
2089365 ']' 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2089365 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.855 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2089365 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089365' 00:23:31.114 killing process with pid 2089365 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2089365 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2089365 00:23:31.114 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.115 15:56:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.651 00:23:33.651 real 0m15.660s 00:23:33.651 user 0m45.912s 00:23:33.651 sys 0m6.413s 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.651 ************************************ 00:23:33.651 END TEST nvmf_fio_host 00:23:33.651 ************************************ 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.651 ************************************ 00:23:33.651 START TEST nvmf_failover 00:23:33.651 ************************************ 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.651 * Looking for test storage... 
00:23:33.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.651 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.652 --rc genhtml_branch_coverage=1 00:23:33.652 --rc genhtml_function_coverage=1 00:23:33.652 --rc genhtml_legend=1 00:23:33.652 --rc geninfo_all_blocks=1 00:23:33.652 --rc geninfo_unexecuted_blocks=1 00:23:33.652 00:23:33.652 ' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.652 --rc genhtml_branch_coverage=1 00:23:33.652 --rc genhtml_function_coverage=1 00:23:33.652 --rc genhtml_legend=1 00:23:33.652 --rc geninfo_all_blocks=1 00:23:33.652 --rc geninfo_unexecuted_blocks=1 00:23:33.652 00:23:33.652 ' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.652 --rc genhtml_branch_coverage=1 00:23:33.652 --rc genhtml_function_coverage=1 00:23:33.652 --rc genhtml_legend=1 00:23:33.652 --rc geninfo_all_blocks=1 00:23:33.652 --rc geninfo_unexecuted_blocks=1 00:23:33.652 00:23:33.652 ' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.652 --rc genhtml_branch_coverage=1 00:23:33.652 --rc genhtml_function_coverage=1 00:23:33.652 --rc genhtml_legend=1 00:23:33.652 --rc geninfo_all_blocks=1 00:23:33.652 --rc geninfo_unexecuted_blocks=1 00:23:33.652 00:23:33.652 ' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.652 15:56:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.222 15:56:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:40.222 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.222 15:56:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:40.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.222 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.223 15:56:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:40.223 Found net devices under 0000:af:00.0: cvl_0_0 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:40.223 Found net devices under 0000:af:00.1: cvl_0_1 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.223 15:56:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:40.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:40.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms
00:23:40.223
00:23:40.223 --- 10.0.0.2 ping statistics ---
00:23:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:40.223 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:40.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:40.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms
00:23:40.223
00:23:40.223 --- 10.0.0.1 ping statistics ---
00:23:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:40.223 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2094243
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2094243
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2094243 ']'
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:40.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:40.223 [2024-12-09 15:56:34.596454] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization...
00:23:40.223 [2024-12-09 15:56:34.596500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:40.223 [2024-12-09 15:56:34.656452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:40.223 [2024-12-09 15:56:34.697789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:40.223 [2024-12-09 15:56:34.697818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:40.223 [2024-12-09 15:56:34.697825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:40.223 [2024-12-09 15:56:34.697832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:40.223 [2024-12-09 15:56:34.697837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:40.223 [2024-12-09 15:56:34.699133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:40.223 [2024-12-09 15:56:34.699250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-09 15:56:34.699251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:40.223 15:56:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-12-09 15:56:35.019841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:40.223 15:56:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:40.223 Malloc0
00:23:40.223 15:56:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:40.223 15:56:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:40.481 15:56:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:40.739 [2024-12-09 15:56:35.809079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:40.739 15:56:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:40.997 [2024-12-09 15:56:36.009631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:40.997 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:41.255 [2024-12-09 15:56:36.226332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2094629
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2094629 /var/tmp/bdevperf.sock
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2094629 ']'
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:41.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:41.255 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:41.512 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:41.513 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:41.513 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:41.770 NVMe0n1
00:23:41.770 15:56:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:42.028
00:23:42.028 15:56:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:42.028 15:56:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2094725
00:23:42.028 15:56:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:23:42.961 15:56:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:43.219 [2024-12-09 15:56:38.359962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2174760 is same with the state(6) to be set
[... identical tcp.c:1790 recv-state messages for tqpair=0x2174760 repeated; duplicates omitted ...]
00:23:43.220 15:56:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:46.499 15:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:46.757
00:23:46.757 15:56:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:47.015 [2024-12-09 15:56:42.039702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2175410 is same with the state(6) to be set
[... identical tcp.c:1790 recv-state messages for tqpair=0x2175410 repeated; duplicates omitted ...]
00:23:47.016 15:56:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:50.385 15:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:50.385 [2024-12-09 15:56:45.249089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:50.385 15:56:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:51.317 15:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:51.317 [2024-12-09 15:56:46.464590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set
[... identical tcp.c:1790 recv-state messages for tqpair=0x22c1980 repeated; duplicates omitted ...]
00:23:51.317 [2024-12-09
15:56:46.464851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 [2024-12-09 15:56:46.464857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 [2024-12-09 15:56:46.464863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 [2024-12-09 15:56:46.464869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 [2024-12-09 15:56:46.464874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 [2024-12-09 15:56:46.464880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1980 is same with the state(6) to be set 00:23:51.317 15:56:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2094725 00:23:57.879 { 00:23:57.879 "results": [ 00:23:57.879 { 00:23:57.879 "job": "NVMe0n1", 00:23:57.879 "core_mask": "0x1", 00:23:57.879 "workload": "verify", 00:23:57.879 "status": "finished", 00:23:57.879 "verify_range": { 00:23:57.879 "start": 0, 00:23:57.879 "length": 16384 00:23:57.879 }, 00:23:57.879 "queue_depth": 128, 00:23:57.879 "io_size": 4096, 00:23:57.879 "runtime": 15.012804, 00:23:57.879 "iops": 11391.542845693582, 00:23:57.879 "mibps": 44.498214240990556, 00:23:57.879 "io_failed": 4317, 00:23:57.879 "io_timeout": 0, 00:23:57.879 "avg_latency_us": 10937.630260658718, 00:23:57.879 "min_latency_us": 415.45142857142855, 00:23:57.879 "max_latency_us": 21346.01142857143 00:23:57.879 } 00:23:57.879 ], 00:23:57.879 "core_count": 1 00:23:57.879 } 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@954 -- # '[' -z 2094629 ']' 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094629' 00:23:57.879 killing process with pid 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2094629 00:23:57.879 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.879 [2024-12-09 15:56:36.305282] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:23:57.879 [2024-12-09 15:56:36.305339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094629 ] 00:23:57.879 [2024-12-09 15:56:36.376698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.879 [2024-12-09 15:56:36.416724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.879 Running I/O for 15 seconds... 
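The bdevperf JSON summary printed above can be sanity-checked offline. The sketch below is illustrative only and is not part of the SPDK autotest scripts; the figures are copied verbatim from the log, and the only relationships it checks are arithmetic ones (mibps should equal iops * io_size converted to MiB, and completed I/O is roughly iops * runtime).

```python
# Sanity-check the bdevperf summary from the log (values copied verbatim).
# This helper is hypothetical and not part of the SPDK test suite.
import json

summary = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.012804,
      "iops": 11391.542845693582,
      "mibps": 44.498214240990556,
      "io_failed": 4317,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = summary["results"][0]
completed = job["iops"] * job["runtime"]            # ~171k I/Os finished in ~15 s
throughput = job["iops"] * job["io_size"] / 2**20   # MiB/s, should match "mibps"

print(f"completed I/O ~= {completed:.0f}")
print(f"throughput    = {throughput:.3f} MiB/s (log reports {job['mibps']:.3f})")
print(f"failed I/O    = {job['io_failed']} (aborts are expected during failover)")
```

The nonzero `io_failed` count is consistent with the ABORTED - SQ DELETION completions that follow: I/Os in flight when a listener is removed are aborted and show up here as failed.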
00:23:57.879 11528.00 IOPS, 45.03 MiB/s [2024-12-09T14:56:53.107Z]
[2024-12-09 15:56:38.361071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.879 [2024-12-09 15:56:38.361207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.879 [2024-12-09 15:56:38.361215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.880 [2024-12-09 15:56:38.361758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.880 [2024-12-09 15:56:38.361764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.361882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.361989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.361996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.362016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.881 [2024-12-09 15:56:38.362031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.881 [2024-12-09 15:56:38.362305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.881 [2024-12-09 15:56:38.362312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.882 [2024-12-09 15:56:38.362320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.882 [2024-12-09 15:56:38.362326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.882 [2024-12-09 15:56:38.362334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.882 [2024-12-09 15:56:38.362340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.882 [2024-12-09 15:56:38.362348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.882 [2024-12-09 15:56:38.362355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.882 [2024-12-09 15:56:38.362363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.882 [2024-12-09 15:56:38.362369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.882 [2024-12-09 15:56:38.362455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 
[2024-12-09 15:56:38.362698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.882 [2024-12-09 15:56:38.362842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.882 [2024-12-09 15:56:38.362850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:57.883 [2024-12-09 15:56:38.362943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.362964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.883 [2024-12-09 15:56:38.362971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.363005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.883 [2024-12-09 15:56:38.363011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.883 [2024-12-09 15:56:38.363017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102432 len:8 PRP1 0x0 PRP2 0x0 00:23:57.883 [2024-12-09 15:56:38.363023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 [2024-12-09 15:56:38.363067] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:57.883 [2024-12-09 15:56:38.363087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.883 [2024-12-09 15:56:38.363095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.883 
[2024-12-09 15:56:38.363102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.883 [2024-12-09 15:56:38.363109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.883 [2024-12-09 15:56:38.363116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.883 [2024-12-09 15:56:38.363125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.883 [2024-12-09 15:56:38.363132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.883 [2024-12-09 15:56:38.363138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.883 [2024-12-09 15:56:38.363145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:57.883 [2024-12-09 15:56:38.363172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21788d0 (9): Bad file descriptor
00:23:57.883 [2024-12-09 15:56:38.365945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:57.883 [2024-12-09 15:56:38.394675] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
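The abort storms in this log all follow one fixed NOTICE format emitted by nvme_qpair.c. As a sketch only (a hypothetical helper, not part of the SPDK test suite), the pattern can be tallied per opcode with a short Python parser; the regex fields mirror the command prints seen above:

```python
import re
from collections import Counter

# Matches SPDK nvme_io_qpair_print_command NOTICE lines such as:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101912 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally_aborts(log_text):
    """Count printed I/O commands per opcode and report the LBA range seen."""
    counts = Counter()
    lbas = []
    for m in CMD_RE.finditer(log_text):
        opcode, _sqid, _cid, _nsid, lba, _length = m.groups()
        counts[opcode] += 1
        lbas.append(int(lba))
    lba_range = (min(lbas), max(lbas)) if lbas else None
    return counts, lba_range

sample = (
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
    "WRITE sqid:1 cid:45 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000\n"
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
    "READ sqid:1 cid:108 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
)
counts, lba_range = tally_aborts(sample)
print(counts["WRITE"], counts["READ"], lba_range)  # 1 1 (54480, 101912)
```

Fed the full log, this kind of tally makes the scale of the SQ-deletion abort burst visible at a glance instead of scrolling thousands of near-identical pairs.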
00:23:57.883 11249.50 IOPS, 43.94 MiB/s [2024-12-09T14:56:53.111Z] 11337.00 IOPS, 44.29 MiB/s [2024-12-09T14:56:53.111Z] 11357.00 IOPS, 44.36 MiB/s [2024-12-09T14:56:53.111Z]
[2024-12-09 15:56:42.042013 - 15:56:42.042701] nvme_qpair.c: [repeated NOTICE pairs elided: 243:nvme_io_qpair_print_command READ sqid:1 nsid:1 lba:54480-54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:54648-54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each answered by 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
p:0 m:0 dnr:0 00:23:57.884 [2024-12-09 15:56:42.042710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.884 [2024-12-09 15:56:42.042716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.884 [2024-12-09 15:56:42.042724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.884 [2024-12-09 15:56:42.042731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.884 [2024-12-09 15:56:42.042738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.884 [2024-12-09 15:56:42.042745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 
15:56:42.042953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.042990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.042998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.885 [2024-12-09 15:56:42.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.885 [2024-12-09 15:56:42.043270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 
[2024-12-09 15:56:42.043361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.886 [2024-12-09 15:56:42.043390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55224 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55232 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55240 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55248 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55256 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55264 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55272 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55280 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55288 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55296 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55304 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55312 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55320 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55328 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55336 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55344 len:8 PRP1 0x0 PRP2 0x0 00:23:57.886 [2024-12-09 15:56:42.043772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.886 [2024-12-09 15:56:42.043778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.886 [2024-12-09 15:56:42.043787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.886 [2024-12-09 15:56:42.043792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55368 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 
[2024-12-09 15:56:42.043857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55376 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55384 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:55400 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.043937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.043944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.043948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.043954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55408 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55416 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55424 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053583] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55432 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55440 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55448 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 
15:56:42.053684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55456 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55464 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.887 [2024-12-09 15:56:42.053755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.887 [2024-12-09 15:56:42.053761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.887 [2024-12-09 15:56:42.053768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55480 len:8 PRP1 0x0 PRP2 0x0 00:23:57.887 [2024-12-09 15:56:42.053775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:57.887 [2024-12-09 15:56:42.053789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.887 [2024-12-09 15:56:42.053795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55488 len:8 PRP1 0x0 PRP2 0x0
00:23:57.887 [2024-12-09 15:56:42.053803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:57.887 [2024-12-09 15:56:42.053817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.887 [2024-12-09 15:56:42.053823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55496 len:8 PRP1 0x0 PRP2 0x0
00:23:57.887 [2024-12-09 15:56:42.053831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053877] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:57.887 [2024-12-09 15:56:42.053903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.887 [2024-12-09 15:56:42.053912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.887 [2024-12-09 15:56:42.053929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.887 [2024-12-09 15:56:42.053948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.887 [2024-12-09 15:56:42.053965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.887 [2024-12-09 15:56:42.053973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:57.887 [2024-12-09 15:56:42.054007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21788d0 (9): Bad file descriptor
00:23:57.887 [2024-12-09 15:56:42.057464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:57.887 [2024-12-09 15:56:42.082747] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
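Editor's note: the abort storm above follows a fixed two-entry pattern (nvme_io_qpair_print_command prints the queued command, then spdk_nvme_print_completion prints its ABORTED - SQ DELETION status). A minimal Python sketch for tallying such pairs from a captured log; the sample lines are copied from this log, while the function name and pairing heuristic are this sketch's own assumptions, not SPDK tooling:

```python
import re

# Sample entries copied from the autotest log above (one entry per line).
SAMPLE = """\
[2024-12-09 15:56:42.043792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 15:56:42.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-09 15:56:42.043814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0
[2024-12-09 15:56:42.043820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
"""

# Matches the command-print entries: opcode, sqid, cid, nsid, lba, len.
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize_aborts(log_text):
    """Pair each printed command with the completion entry that follows it
    and collect (opcode, lba, len) for completions aborted by SQ deletion."""
    aborted = []
    pending = None
    for line in log_text.splitlines():
        m = CMD_RE.search(line)
        if m:
            pending = (m.group(1), int(m.group(5)), int(m.group(6)))
        elif "ABORTED - SQ DELETION" in line and pending is not None:
            aborted.append(pending)
            pending = None
    return aborted

print(summarize_aborts(SAMPLE))  # → [('WRITE', 55352, 8), ('WRITE', 55360, 8)]
```

Against the full log this would show the aborted WRITE stream advancing in contiguous 8-block strides (lba 55352, 55360, 55368, ...) up to the failover notice.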
00:23:57.887 11279.80 IOPS, 44.06 MiB/s [2024-12-09T14:56:53.116Z] 11341.17 IOPS, 44.30 MiB/s [2024-12-09T14:56:53.116Z] 11386.71 IOPS, 44.48 MiB/s [2024-12-09T14:56:53.116Z] 11400.12 IOPS, 44.53 MiB/s [2024-12-09T14:56:53.116Z] 11430.78 IOPS, 44.65 MiB/s [2024-12-09T14:56:53.116Z] [2024-12-09 15:56:46.465782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 
15:56:46.465979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.465987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.465994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.888 [2024-12-09 15:56:46.466050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:87 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.888 [2024-12-09 15:56:46.466343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.888 [2024-12-09 15:56:46.466352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.889 [2024-12-09 15:56:46.466395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.889 [2024-12-09 15:56:46.466501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 15:56:46.466623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.889 [2024-12-09 15:56:46.466630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.889 [2024-12-09 
00:23:57.889 [2024-12-09 15:56:46.466637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.889 [2024-12-09 15:56:46.466644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE (SGL DATA BLOCK) / ABORTED - SQ DELETION record pair repeats for lba:77656 through lba:77952, cid varying, timestamps 15:56:46.466652 through 15:56:46.467194 ...]
00:23:57.890 [2024-12-09 15:56:46.467232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.890 [2024-12-09 15:56:46.467240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0
00:23:57.890 [2024-12-09 15:56:46.467247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.890 [2024-12-09 15:56:46.467255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same aborting-queued-i/o / manual-completion / ABORTED - SQ DELETION cycle repeats for WRITE sqid:1 cid:0 lba:77968 through lba:78200, timestamps 15:56:46.467261 through 15:56:46.479049 ...]
00:23:57.892 [2024-12-09 15:56:46.479054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0
00:23:57.892 [2024-12-09 15:56:46.479061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:57.892 [2024-12-09 15:56:46.479072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.892 [2024-12-09 15:56:46.479079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0
00:23:57.892 [2024-12-09 15:56:46.479085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479129] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:57.892 [2024-12-09 15:56:46.479157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.892 [2024-12-09 15:56:46.479166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.892 [2024-12-09 15:56:46.479181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.892 [2024-12-09 15:56:46.479194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.892 [2024-12-09 15:56:46.479207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.892 [2024-12-09 15:56:46.479214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:57.892 [2024-12-09 15:56:46.479244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21788d0 (9): Bad file descriptor
00:23:57.892 [2024-12-09 15:56:46.482583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:57.892 [2024-12-09 15:56:46.511944] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:57.892 11395.00 IOPS, 44.51 MiB/s [2024-12-09T14:56:53.120Z] 11393.36 IOPS, 44.51 MiB/s [2024-12-09T14:56:53.120Z] 11388.92 IOPS, 44.49 MiB/s [2024-12-09T14:56:53.120Z] 11383.85 IOPS, 44.47 MiB/s [2024-12-09T14:56:53.120Z] 11381.29 IOPS, 44.46 MiB/s [2024-12-09T14:56:53.120Z] 11392.73 IOPS, 44.50 MiB/s
00:23:57.892 Latency(us)
00:23:57.892 [2024-12-09T14:56:53.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:57.892 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:57.892 Verification LBA range: start 0x0 length 0x4000
00:23:57.892 NVMe0n1 : 15.01 11391.54 44.50 287.55 0.00 10937.63 415.45 21346.01
00:23:57.892 [2024-12-09T14:56:53.120Z] ===================================================================================================================
00:23:57.892 [2024-12-09T14:56:53.120Z] Total : 11391.54 44.50 287.55 0.00 10937.63 415.45 21346.01
00:23:57.892 Received shutdown signal, test time was about 15.000000 seconds
00:23:57.892
00:23:57.892 Latency(us)
00:23:57.892 [2024-12-09T14:56:53.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:57.892 [2024-12-09T14:56:53.120Z] ===================================================================================================================
00:23:57.892 [2024-12-09T14:56:53.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2097218
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2097218 /var/tmp/bdevperf.sock
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2097218 ']'
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:57.892 15:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:57.892 [2024-12-09 15:56:52.985984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:57.892 15:56:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:58.150 [2024-12-09 15:56:53.190554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:58.150 15:56:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:58.407 NVMe0n1
00:23:58.407 15:56:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:58.664
00:23:58.921 15:56:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:58.921
00:23:59.179 15:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:59.179 15:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:59.179 15:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:59.436 15:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:02.713 15:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:02.713 15:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:02.713 15:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.713 15:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2098130 00:24:02.713 15:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2098130 00:24:04.087 { 00:24:04.087 "results": [ 00:24:04.087 { 00:24:04.087 "job": "NVMe0n1", 00:24:04.087 "core_mask": "0x1", 00:24:04.087 "workload": "verify", 00:24:04.087 "status": "finished", 00:24:04.087 "verify_range": { 00:24:04.087 "start": 0, 00:24:04.087 "length": 16384 00:24:04.087 }, 00:24:04.087 "queue_depth": 128, 00:24:04.087 "io_size": 4096, 00:24:04.087 "runtime": 1.006718, 00:24:04.087 "iops": 11545.437749200868, 00:24:04.087 "mibps": 45.09936620781589, 00:24:04.087 "io_failed": 0, 00:24:04.087 "io_timeout": 0, 00:24:04.087 "avg_latency_us": 11032.30178717895, 00:24:04.087 "min_latency_us": 1825.6457142857143, 00:24:04.087 "max_latency_us": 11172.327619047619 00:24:04.087 } 00:24:04.087 ], 00:24:04.087 "core_count": 1 00:24:04.087 } 00:24:04.087 15:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:04.087 [2024-12-09 15:56:52.596118] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:24:04.087 [2024-12-09 15:56:52.596171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097218 ] 00:24:04.087 [2024-12-09 15:56:52.668105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.087 [2024-12-09 15:56:52.704933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.087 [2024-12-09 15:56:54.564164] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:04.087 [2024-12-09 15:56:54.564207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.087 [2024-12-09 15:56:54.564223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.087 [2024-12-09 15:56:54.564232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.088 [2024-12-09 15:56:54.564239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.088 [2024-12-09 15:56:54.564247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.088 [2024-12-09 15:56:54.564254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.088 [2024-12-09 15:56:54.564261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.088 [2024-12-09 15:56:54.564267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.088 [2024-12-09 15:56:54.564273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:04.088 [2024-12-09 15:56:54.564297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:04.088 [2024-12-09 15:56:54.564311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214c8d0 (9): Bad file descriptor 00:24:04.088 [2024-12-09 15:56:54.666357] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:04.088 Running I/O for 1 seconds... 00:24:04.088 11479.00 IOPS, 44.84 MiB/s 00:24:04.088 Latency(us) 00:24:04.088 [2024-12-09T14:56:59.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.088 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:04.088 Verification LBA range: start 0x0 length 0x4000 00:24:04.088 NVMe0n1 : 1.01 11545.44 45.10 0.00 0.00 11032.30 1825.65 11172.33 00:24:04.088 [2024-12-09T14:56:59.316Z] =================================================================================================================== 00:24:04.088 [2024-12-09T14:56:59.316Z] Total : 11545.44 45.10 0.00 0.00 11032.30 1825.65 11172.33 00:24:04.088 15:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.088 15:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:04.088 15:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.088 15:56:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.088 15:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:04.345 15:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.603 15:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:07.880 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.880 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2097218 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2097218 ']' 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2097218 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2097218 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2097218' 00:24:07.881 killing 
process with pid 2097218 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2097218 00:24:07.881 15:57:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2097218 00:24:07.881 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:07.881 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.138 rmmod nvme_tcp 00:24:08.138 rmmod nvme_fabrics 00:24:08.138 rmmod nvme_keyring 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2094243 ']' 00:24:08.138 15:57:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2094243 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2094243 ']' 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2094243 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.138 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094243 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094243' 00:24:08.397 killing process with pid 2094243 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2094243 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2094243 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.397 15:57:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.397 15:57:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.932 00:24:10.932 real 0m37.249s 00:24:10.932 user 1m57.845s 00:24:10.932 sys 0m7.915s 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:10.932 ************************************ 00:24:10.932 END TEST nvmf_failover 00:24:10.932 ************************************ 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.932 ************************************ 00:24:10.932 START TEST nvmf_host_discovery 00:24:10.932 ************************************ 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.932 * Looking for test storage... 
00:24:10.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.932 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.933 --rc genhtml_branch_coverage=1 00:24:10.933 --rc genhtml_function_coverage=1 00:24:10.933 --rc 
genhtml_legend=1 00:24:10.933 --rc geninfo_all_blocks=1 00:24:10.933 --rc geninfo_unexecuted_blocks=1 00:24:10.933 00:24:10.933 ' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.933 --rc genhtml_branch_coverage=1 00:24:10.933 --rc genhtml_function_coverage=1 00:24:10.933 --rc genhtml_legend=1 00:24:10.933 --rc geninfo_all_blocks=1 00:24:10.933 --rc geninfo_unexecuted_blocks=1 00:24:10.933 00:24:10.933 ' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.933 --rc genhtml_branch_coverage=1 00:24:10.933 --rc genhtml_function_coverage=1 00:24:10.933 --rc genhtml_legend=1 00:24:10.933 --rc geninfo_all_blocks=1 00:24:10.933 --rc geninfo_unexecuted_blocks=1 00:24:10.933 00:24:10.933 ' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.933 --rc genhtml_branch_coverage=1 00:24:10.933 --rc genhtml_function_coverage=1 00:24:10.933 --rc genhtml_legend=1 00:24:10.933 --rc geninfo_all_blocks=1 00:24:10.933 --rc geninfo_unexecuted_blocks=1 00:24:10.933 00:24:10.933 ' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.933 15:57:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.933 15:57:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.933 15:57:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.933 15:57:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.508 
15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.508 15:57:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.508 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:17.509 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:17.509 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:17.509 Found net devices under 0000:af:00.0: cvl_0_0 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:17.509 Found net devices under 0000:af:00.1: cvl_0_1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:24:17.509 00:24:17.509 --- 10.0.0.2 ping statistics --- 00:24:17.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.509 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:17.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:24:17.509 00:24:17.509 --- 10.0.0.1 ping statistics --- 00:24:17.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.509 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.509 
15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2103043 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2103043 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2103043 ']' 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.509 15:57:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 [2024-12-09 15:57:11.890757] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:24:17.509 [2024-12-09 15:57:11.890800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.509 [2024-12-09 15:57:11.969084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.509 [2024-12-09 15:57:12.007569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.509 [2024-12-09 15:57:12.007608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.509 [2024-12-09 15:57:12.007615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.509 [2024-12-09 15:57:12.007621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.509 [2024-12-09 15:57:12.007626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.509 [2024-12-09 15:57:12.008157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 [2024-12-09 15:57:12.150141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 [2024-12-09 15:57:12.162334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:17.509 15:57:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 null0 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 null1 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2103065 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2103065 /tmp/host.sock 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2103065 ']' 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:17.509 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 [2024-12-09 15:57:12.240439] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:24:17.509 [2024-12-09 15:57:12.240478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103065 ] 00:24:17.509 [2024-12-09 15:57:12.313423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.509 [2024-12-09 15:57:12.352275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:17.509 
15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.509 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:17.510 15:57:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:17.510 
15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.510 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 [2024-12-09 15:57:12.783915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:17.769 15:57:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:18.336 [2024-12-09 15:57:13.477551] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:18.336 [2024-12-09 15:57:13.477568] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:18.336 [2024-12-09 15:57:13.477579] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:18.336 [2024-12-09 15:57:13.563832] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:18.595 [2024-12-09 15:57:13.658546] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:18.595 [2024-12-09 15:57:13.659289] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1c18260:1 started. 00:24:18.595 [2024-12-09 15:57:13.660637] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:18.595 [2024-12-09 15:57:13.660652] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:18.595 [2024-12-09 15:57:13.666672] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c18260 was disconnected and freed. delete nvme_qpair. 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.853 15:57:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:18.853 15:57:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.853 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:18.854 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:19.113 15:57:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 [2024-12-09 15:57:14.171453] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c18440:1 started. 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.113 [2024-12-09 15:57:14.178037] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c18440 was disconnected and freed. delete nvme_qpair. 
00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 
00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.113 [2024-12-09 15:57:14.271874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.113 [2024-12-09 15:57:14.272300] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:19.113 [2024-12-09 15:57:14.272318] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.113 15:57:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.113 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.114 15:57:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.114 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.373 [2024-12-09 15:57:14.358572] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:19.373 15:57:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:19.373 [2024-12-09 15:57:14.423049] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:19.373 [2024-12-09 15:57:14.423083] 
bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:19.373 [2024-12-09 15:57:14.423091] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:19.373 [2024-12-09 15:57:14.423096] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.309 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.310 [2024-12-09 15:57:15.524182] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:20.310 [2024-12-09 15:57:15.524202] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:20.310 [2024-12-09 15:57:15.533241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.310 [2024-12-09 15:57:15.533258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.310 [2024-12-09 15:57:15.533267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.310 [2024-12-09 15:57:15.533275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.310 [2024-12-09 15:57:15.533283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.310 [2024-12-09 15:57:15.533289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.310 [2024-12-09 15:57:15.533297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.310 [2024-12-09 15:57:15.533303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.310 [2024-12-09 15:57:15.533310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.310 15:57:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.310 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.570 [2024-12-09 15:57:15.543255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.570 [2024-12-09 15:57:15.553289] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.570 [2024-12-09 15:57:15.553301] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.570 [2024-12-09 15:57:15.553308] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.553312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.570 [2024-12-09 15:57:15.553328] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:20.570 [2024-12-09 15:57:15.553448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-12-09 15:57:15.553461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.570 [2024-12-09 15:57:15.553469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.570 [2024-12-09 15:57:15.553480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.570 [2024-12-09 15:57:15.553490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.570 [2024-12-09 15:57:15.553496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.570 [2024-12-09 15:57:15.553504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.570 [2024-12-09 15:57:15.553510] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.570 [2024-12-09 15:57:15.553515] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.570 [2024-12-09 15:57:15.553519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.570 [2024-12-09 15:57:15.563358] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.570 [2024-12-09 15:57:15.563368] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:20.570 [2024-12-09 15:57:15.563372] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.563375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.570 [2024-12-09 15:57:15.563388] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.563576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-12-09 15:57:15.563588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.570 [2024-12-09 15:57:15.563595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.570 [2024-12-09 15:57:15.563605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.570 [2024-12-09 15:57:15.563618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.570 [2024-12-09 15:57:15.563625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.570 [2024-12-09 15:57:15.563631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.570 [2024-12-09 15:57:15.563636] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.570 [2024-12-09 15:57:15.563641] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.570 [2024-12-09 15:57:15.563645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:20.570 [2024-12-09 15:57:15.573419] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.570 [2024-12-09 15:57:15.573433] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.570 [2024-12-09 15:57:15.573437] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.573441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.570 [2024-12-09 15:57:15.573456] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.573635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-12-09 15:57:15.573653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.570 [2024-12-09 15:57:15.573661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.570 [2024-12-09 15:57:15.573671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.570 [2024-12-09 15:57:15.573681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.570 [2024-12-09 15:57:15.573687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.570 [2024-12-09 15:57:15.573693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.570 [2024-12-09 15:57:15.573699] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:20.570 [2024-12-09 15:57:15.573703] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.570 [2024-12-09 15:57:15.573707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:20.570 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:20.570 [2024-12-09 15:57:15.583486] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.570 [2024-12-09 15:57:15.583498] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.570 [2024-12-09 15:57:15.583508] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:20.570 [2024-12-09 15:57:15.583512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.570 [2024-12-09 15:57:15.583524] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.570 [2024-12-09 15:57:15.583709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.570 [2024-12-09 15:57:15.583721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.570 [2024-12-09 15:57:15.583728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.570 [2024-12-09 15:57:15.583739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.570 [2024-12-09 15:57:15.583749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.570 [2024-12-09 15:57:15.583754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.570 [2024-12-09 15:57:15.583761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.570 [2024-12-09 15:57:15.583766] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.570 [2024-12-09 15:57:15.583770] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.571 [2024-12-09 15:57:15.583774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.571 [2024-12-09 15:57:15.593556] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.571 [2024-12-09 15:57:15.593571] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.571 [2024-12-09 15:57:15.593575] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.571 [2024-12-09 15:57:15.593578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.571 [2024-12-09 15:57:15.593592] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:20.571 [2024-12-09 15:57:15.593702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-12-09 15:57:15.593713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.571 [2024-12-09 15:57:15.593720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.571 [2024-12-09 15:57:15.593731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.571 [2024-12-09 15:57:15.593740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.571 [2024-12-09 15:57:15.593746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.571 [2024-12-09 15:57:15.593753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.571 [2024-12-09 15:57:15.593762] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.571 [2024-12-09 15:57:15.593766] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.571 [2024-12-09 15:57:15.593770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.571 [2024-12-09 15:57:15.603622] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.571 [2024-12-09 15:57:15.603632] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:20.571 [2024-12-09 15:57:15.603636] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.571 [2024-12-09 15:57:15.603640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.571 [2024-12-09 15:57:15.603652] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.571 [2024-12-09 15:57:15.603857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.571 [2024-12-09 15:57:15.603868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be8710 with addr=10.0.0.2, port=4420 00:24:20.571 [2024-12-09 15:57:15.603876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be8710 is same with the state(6) to be set 00:24:20.571 [2024-12-09 15:57:15.603885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be8710 (9): Bad file descriptor 00:24:20.571 [2024-12-09 15:57:15.603895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.571 [2024-12-09 15:57:15.603901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.571 [2024-12-09 15:57:15.603907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.571 [2024-12-09 15:57:15.603913] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.571 [2024-12-09 15:57:15.603917] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.571 [2024-12-09 15:57:15.603921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:20.571 [2024-12-09 15:57:15.611441] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:20.571 [2024-12-09 15:57:15.611456] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.571 15:57:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:20.571 15:57:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.571 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.830 
15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:20.830 15:57:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.830 15:57:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.767 [2024-12-09 15:57:16.946384] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.767 [2024-12-09 15:57:16.946400] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.767 [2024-12-09 15:57:16.946412] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.025 [2024-12-09 15:57:17.032660] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:22.026 [2024-12-09 15:57:17.252773] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:22.026 [2024-12-09 15:57:17.253353] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1be5210:1 started. 00:24:22.285 [2024-12-09 15:57:17.254948] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:22.285 [2024-12-09 15:57:17.254972] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.285 [2024-12-09 15:57:17.256338] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1be5210 was disconnected and freed. delete nvme_qpair. 
00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.285 request: 00:24:22.285 { 00:24:22.285 "name": "nvme", 00:24:22.285 "trtype": "tcp", 00:24:22.285 "traddr": "10.0.0.2", 00:24:22.285 "adrfam": "ipv4", 00:24:22.285 "trsvcid": "8009", 00:24:22.285 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.285 "wait_for_attach": true, 00:24:22.285 "method": "bdev_nvme_start_discovery", 00:24:22.285 "req_id": 1 00:24:22.285 } 00:24:22.285 Got JSON-RPC error response 00:24:22.285 response: 00:24:22.285 { 00:24:22.285 "code": -17, 00:24:22.285 
"message": "File exists" 00:24:22.285 } 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.285 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.286 request: 00:24:22.286 { 00:24:22.286 "name": "nvme_second", 00:24:22.286 "trtype": "tcp", 00:24:22.286 "traddr": "10.0.0.2", 00:24:22.286 "adrfam": "ipv4", 00:24:22.286 "trsvcid": "8009", 00:24:22.286 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.286 "wait_for_attach": true, 00:24:22.286 "method": "bdev_nvme_start_discovery", 00:24:22.286 "req_id": 1 00:24:22.286 } 00:24:22.286 Got JSON-RPC error response 00:24:22.286 response: 00:24:22.286 { 00:24:22.286 "code": -17, 00:24:22.286 "message": "File exists" 00:24:22.286 } 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.286 
15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.286 15:57:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.663 [2024-12-09 15:57:18.494296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.663 [2024-12-09 15:57:18.494321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf7530 with addr=10.0.0.2, port=8010 00:24:23.663 [2024-12-09 15:57:18.494333] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.663 [2024-12-09 15:57:18.494340] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:23.663 [2024-12-09 15:57:18.494346] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:24.599 [2024-12-09 15:57:19.496813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.599 [2024-12-09 15:57:19.496836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf7530 with addr=10.0.0.2, port=8010 00:24:24.599 [2024-12-09 15:57:19.496851] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:24.599 [2024-12-09 15:57:19.496857] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:24.599 
[2024-12-09 15:57:19.496862] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:25.535 [2024-12-09 15:57:20.498994] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:25.535 request: 00:24:25.535 { 00:24:25.535 "name": "nvme_second", 00:24:25.535 "trtype": "tcp", 00:24:25.535 "traddr": "10.0.0.2", 00:24:25.535 "adrfam": "ipv4", 00:24:25.535 "trsvcid": "8010", 00:24:25.535 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:25.535 "wait_for_attach": false, 00:24:25.535 "attach_timeout_ms": 3000, 00:24:25.535 "method": "bdev_nvme_start_discovery", 00:24:25.535 "req_id": 1 00:24:25.535 } 00:24:25.535 Got JSON-RPC error response 00:24:25.535 response: 00:24:25.535 { 00:24:25.535 "code": -110, 00:24:25.535 "message": "Connection timed out" 00:24:25.535 } 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2103065 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.535 rmmod nvme_tcp 00:24:25.535 rmmod nvme_fabrics 00:24:25.535 rmmod nvme_keyring 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2103043 ']' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2103043 00:24:25.535 
15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2103043 ']' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2103043 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2103043 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2103043' 00:24:25.535 killing process with pid 2103043 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2103043 00:24:25.535 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2103043 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.794 15:57:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.794 15:57:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.699 15:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.958 00:24:27.958 real 0m17.206s 00:24:27.958 user 0m20.498s 00:24:27.958 sys 0m5.833s 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.958 ************************************ 00:24:27.958 END TEST nvmf_host_discovery 00:24:27.958 ************************************ 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.958 15:57:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.958 ************************************ 00:24:27.958 START TEST nvmf_host_multipath_status 00:24:27.958 ************************************ 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:24:27.958 * Looking for test storage... 00:24:27.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.958 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.218 
15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.218 --rc genhtml_branch_coverage=1 00:24:28.218 --rc genhtml_function_coverage=1 00:24:28.218 --rc genhtml_legend=1 00:24:28.218 --rc geninfo_all_blocks=1 00:24:28.218 --rc geninfo_unexecuted_blocks=1 00:24:28.218 00:24:28.218 ' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.218 --rc genhtml_branch_coverage=1 00:24:28.218 --rc genhtml_function_coverage=1 00:24:28.218 --rc genhtml_legend=1 00:24:28.218 --rc geninfo_all_blocks=1 00:24:28.218 --rc geninfo_unexecuted_blocks=1 00:24:28.218 00:24:28.218 ' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.218 --rc genhtml_branch_coverage=1 00:24:28.218 --rc genhtml_function_coverage=1 00:24:28.218 --rc genhtml_legend=1 00:24:28.218 --rc geninfo_all_blocks=1 00:24:28.218 --rc geninfo_unexecuted_blocks=1 00:24:28.218 00:24:28.218 ' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:28.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.218 --rc genhtml_branch_coverage=1 00:24:28.218 --rc genhtml_function_coverage=1 00:24:28.218 --rc genhtml_legend=1 00:24:28.218 --rc geninfo_all_blocks=1 00:24:28.218 --rc geninfo_unexecuted_blocks=1 00:24:28.218 00:24:28.218 ' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.218 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.219 15:57:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.219 15:57:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.788 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.788 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.789 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.789 15:57:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.789 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.789 15:57:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.789 15:57:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:24:34.789 00:24:34.789 --- 10.0.0.2 ping statistics --- 00:24:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.789 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:34.789 00:24:34.789 --- 10.0.0.1 ping statistics --- 00:24:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.789 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2108090 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2108090 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2108090 ']' 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.789 [2024-12-09 15:57:29.272071] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:24:34.789 [2024-12-09 15:57:29.272123] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.789 [2024-12-09 15:57:29.352133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.789 [2024-12-09 15:57:29.392600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.789 [2024-12-09 15:57:29.392635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:34.789 [2024-12-09 15:57:29.392642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.789 [2024-12-09 15:57:29.392648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.789 [2024-12-09 15:57:29.392653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.789 [2024-12-09 15:57:29.393801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.789 [2024-12-09 15:57:29.393802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2108090 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:34.789 [2024-12-09 15:57:29.694709] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:34.789 Malloc0 00:24:34.789 15:57:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:35.048 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.307 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.565 [2024-12-09 15:57:30.556545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.565 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.566 [2024-12-09 15:57:30.740980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2108343 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2108343 /var/tmp/bdevperf.sock 00:24:35.566 15:57:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2108343 ']' 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.566 15:57:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.824 15:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.824 15:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:35.824 15:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:36.083 15:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:36.650 Nvme0n1 00:24:36.650 15:57:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:36.908 Nvme0n1 00:24:37.167 15:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:37.167 15:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:39.072 15:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:39.072 15:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:39.331 15:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:39.331 15:57:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.709 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.968 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.968 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.968 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.968 15:57:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.968 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.968 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.968 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.968 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.227 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.227 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.227 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.227 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.486 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.486 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.486 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.486 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.745 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.745 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:41.745 15:57:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:42.003 15:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.003 15:57:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:43.381 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.382 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.711 15:57:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.003 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.003 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.003 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.003 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.262 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.262 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.262 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:24:44.262 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.521 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.521 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:44.521 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.521 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:44.780 15:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:45.716 15:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:45.716 15:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:45.974 15:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.974 15:57:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:45.974 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:24:45.974 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:45.974 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.974 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.233 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.233 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.233 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.233 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.492 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.492 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.492 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.492 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.750 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:24:46.750 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.750 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.750 15:57:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.008 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.008 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.008 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.008 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.268 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.268 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:47.268 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.268 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.527 15:57:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:48.461 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:48.461 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.461 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.461 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.720 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.720 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.720 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.720 15:57:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.978 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.979 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.979 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:48.979 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.237 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.237 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.237 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.237 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.496 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.496 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.496 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.496 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:49.755 15:57:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.013 15:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.271 15:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:51.207 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:51.207 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.207 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.207 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.465 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 
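The `port_status` checks repeated throughout this log all follow one pattern: call the `bdev_nvme_get_io_paths` RPC, then use a `jq` filter to pull a single boolean (`current`, `connected`, or `accessible`) for the path whose listener port (`trsvcid`) matches. The sketch below mirrors that filter in Python against a hand-made sample; the sample JSON shape is inferred from the `jq` expression in the log, and real `bdev_nvme_get_io_paths` output carries more fields per path.

```python
import json

# Hypothetical sample shaped like the output the log's jq filter walks:
# jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, trsvcid, field):
    """Mirror of the jq select-by-trsvcid filter: return one flag for one port."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The test script's `[[ true == \t\r\u\e ]]` lines are simply comparing the `jq` output string against the expected value passed to `port_status`.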
00:24:51.465 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.465 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.465 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.724 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.724 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.724 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.724 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.983 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.983 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.983 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.983 15:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:51.983 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:24:51.983 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:51.983 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.983 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.242 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.242 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.242 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.242 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.501 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.501 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:52.501 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.759 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.759 15:57:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:54.136 15:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:54.136 15:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.136 15:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.136 15:57:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.136 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.136 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.136 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.136 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.395 
15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.395 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.654 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.654 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.654 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.654 15:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.913 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.913 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.913 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 
00:24:54.913 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.172 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.172 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:55.431 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:55.431 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:55.690 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.690 15:57:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:57.067 15:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:57.067 15:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.067 15:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.067 15:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.067 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.067 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:57.067 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.067 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.326 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.327 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.585 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.586 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.586 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.586 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.844 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.844 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.844 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.844 15:57:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.103 15:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.103 15:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:58.103 15:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:24:58.362 15:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.362 15:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected 
true 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.739 15:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:59.998 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:59.998 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:59.998 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.998 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.256 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.256 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.256 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.256 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.515 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.515 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible true 00:25:00.515 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.515 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.773 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.773 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:00.773 15:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:01.032 15:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.032 15:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.409 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.668 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.668 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.668 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.668 15:57:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.927 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.927 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:02.927 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.927 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.186 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.186 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.186 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.186 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.445 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.445 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:03.445 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.703 15:57:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:03.703 15:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:05.080 15:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:05.080 15:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.080 15:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.080 15:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.080 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.080 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.080 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.080 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.338 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.338 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.338 
15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.339 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.597 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.855 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.855 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
00:25:05.855 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.855 15:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2108343 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2108343 ']' 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2108343 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2108343 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2108343' 00:25:06.115 killing process with pid 2108343 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2108343 00:25:06.115 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2108343 00:25:06.115 { 
00:25:06.115 "results": [ 00:25:06.115 { 00:25:06.115 "job": "Nvme0n1", 00:25:06.115 "core_mask": "0x4", 00:25:06.115 "workload": "verify", 00:25:06.115 "status": "terminated", 00:25:06.115 "verify_range": { 00:25:06.115 "start": 0, 00:25:06.115 "length": 16384 00:25:06.115 }, 00:25:06.115 "queue_depth": 128, 00:25:06.115 "io_size": 4096, 00:25:06.115 "runtime": 28.930019, 00:25:06.115 "iops": 10780.670417119325, 00:25:06.115 "mibps": 42.11199381687236, 00:25:06.115 "io_failed": 0, 00:25:06.115 "io_timeout": 0, 00:25:06.115 "avg_latency_us": 11853.725314541301, 00:25:06.115 "min_latency_us": 889.4171428571428, 00:25:06.115 "max_latency_us": 3019898.88 00:25:06.115 } 00:25:06.115 ], 00:25:06.115 "core_count": 1 00:25:06.115 } 00:25:06.377 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2108343 00:25:06.377 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.377 [2024-12-09 15:57:30.809483] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:25:06.377 [2024-12-09 15:57:30.809533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108343 ] 00:25:06.377 [2024-12-09 15:57:30.882018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.377 [2024-12-09 15:57:30.921428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.377 Running I/O for 90 seconds... 
00:25:06.377 11501.00 IOPS, 44.93 MiB/s [2024-12-09T14:58:01.605Z] 11566.50 IOPS, 45.18 MiB/s [2024-12-09T14:58:01.605Z] 11594.67 IOPS, 45.29 MiB/s [2024-12-09T14:58:01.605Z] 11606.00 IOPS, 45.34 MiB/s [2024-12-09T14:58:01.605Z] 11625.40 IOPS, 45.41 MiB/s [2024-12-09T14:58:01.605Z] 11621.83 IOPS, 45.40 MiB/s [2024-12-09T14:58:01.605Z] 11602.43 IOPS, 45.32 MiB/s [2024-12-09T14:58:01.605Z] 11619.00 IOPS, 45.39 MiB/s [2024-12-09T14:58:01.605Z] 11620.89 IOPS, 45.39 MiB/s [2024-12-09T14:58:01.605Z] 11592.50 IOPS, 45.28 MiB/s [2024-12-09T14:58:01.605Z] 11603.82 IOPS, 45.33 MiB/s [2024-12-09T14:58:01.605Z] 11606.25 IOPS, 45.34 MiB/s [2024-12-09T14:58:01.605Z] [2024-12-09 15:57:45.142340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142463] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.377 [2024-12-09 15:57:45.142521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.377 [2024-12-09 15:57:45.142533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.142703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.142711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16680 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:06.378 [2024-12-09 15:57:45.143720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.378 [2024-12-09 15:57:45.143793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-12-09 15:57:45.143800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.379 
[2024-12-09 15:57:45.143832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 
15:57:45.143945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.143984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.143997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144057] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.379 [2024-12-09 15:57:45.144175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-12-09 15:57:45.144182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.379
[repeated nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs elided: WRITE lba:16960-17392 and READ lba:16376-16424 commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-09 15:57:45.144195 through 15:57:45.145803]
11500.23 IOPS, 44.92 MiB/s [2024-12-09T14:58:01.609Z] 10678.79 IOPS, 41.71 MiB/s [2024-12-09T14:58:01.609Z] 9966.87 IOPS, 38.93 MiB/s [2024-12-09T14:58:01.609Z] 9426.06 IOPS, 36.82 MiB/s [2024-12-09T14:58:01.609Z] 9555.65 IOPS, 37.33 MiB/s [2024-12-09T14:58:01.609Z] 9663.28 IOPS, 37.75 MiB/s [2024-12-09T14:58:01.609Z] 9830.32 IOPS, 38.40 MiB/s [2024-12-09T14:58:01.609Z] 10024.05 IOPS, 39.16 MiB/s [2024-12-09T14:58:01.609Z] 10202.52 IOPS, 39.85 MiB/s [2024-12-09T14:58:01.609Z] 10275.82 IOPS, 40.14 MiB/s [2024-12-09T14:58:01.609Z] 10332.70 IOPS, 40.36 MiB/s [2024-12-09T14:58:01.609Z] 10382.92 IOPS, 40.56 MiB/s [2024-12-09T14:58:01.609Z] 10515.76 IOPS, 41.08 MiB/s [2024-12-09T14:58:01.609Z] 10639.27 IOPS, 41.56 MiB/s [2024-12-09T14:58:01.609Z]
[repeated nvme_io_qpair_print_command/spdk_nvme_print_completion *NOTICE* pairs elided: WRITE lba:51104-51216 and READ lba:50176-50960 commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-12-09 15:57:58.897245 through 15:57:58.898683]
[2024-12-09 15:57:58.898696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.382 [2024-12-09 15:57:58.898704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.382 [2024-12-09 15:57:58.898805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.382 [2024-12-09 15:57:58.898823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.382 [2024-12-09 15:57:58.898969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.382 [2024-12-09 15:57:58.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.382 10724.67 IOPS, 41.89 MiB/s [2024-12-09T14:58:01.610Z] 10756.00 IOPS, 42.02 MiB/s [2024-12-09T14:58:01.610Z] Received shutdown signal, test time was about 28.930666 seconds 00:25:06.382 00:25:06.382 Latency(us) 00:25:06.382 [2024-12-09T14:58:01.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.382 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.382 Verification LBA range: start 0x0 length 0x4000 00:25:06.382 Nvme0n1 : 28.93 10780.67 42.11 0.00 0.00 11853.73 889.42 3019898.88 00:25:06.382 [2024-12-09T14:58:01.610Z] 
=================================================================================================================== 00:25:06.382 [2024-12-09T14:58:01.610Z] Total : 10780.67 42.11 0.00 0.00 11853.73 889.42 3019898.88 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.383 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.383 rmmod nvme_tcp 00:25:06.640 rmmod nvme_fabrics 00:25:06.641 rmmod nvme_keyring 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:06.641 15:58:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2108090 ']' 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2108090 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2108090 ']' 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2108090 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2108090 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2108090' 00:25:06.641 killing process with pid 2108090 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2108090 00:25:06.641 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2108090 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:06.899 15:58:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.899 15:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.802 15:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.802 00:25:08.802 real 0m40.954s 00:25:08.802 user 1m50.886s 00:25:08.802 sys 0m11.592s 00:25:08.802 15:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.802 15:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:08.802 ************************************ 00:25:08.802 END TEST nvmf_host_multipath_status 00:25:08.802 ************************************ 00:25:08.802 15:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:08.802 15:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:08.802 15:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.802 15:58:04 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.061 ************************************ 00:25:09.061 START TEST nvmf_discovery_remove_ifc 00:25:09.061 ************************************ 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.061 * Looking for test storage... 00:25:09.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.061 15:58:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.061 
15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.061 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:09.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.061 --rc genhtml_branch_coverage=1 00:25:09.062 --rc genhtml_function_coverage=1 00:25:09.062 --rc genhtml_legend=1 00:25:09.062 --rc geninfo_all_blocks=1 00:25:09.062 --rc geninfo_unexecuted_blocks=1 00:25:09.062 00:25:09.062 ' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:09.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.062 --rc genhtml_branch_coverage=1 00:25:09.062 --rc genhtml_function_coverage=1 00:25:09.062 --rc genhtml_legend=1 00:25:09.062 --rc geninfo_all_blocks=1 00:25:09.062 --rc geninfo_unexecuted_blocks=1 00:25:09.062 00:25:09.062 ' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:09.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.062 --rc genhtml_branch_coverage=1 00:25:09.062 --rc genhtml_function_coverage=1 00:25:09.062 --rc genhtml_legend=1 00:25:09.062 --rc geninfo_all_blocks=1 00:25:09.062 --rc geninfo_unexecuted_blocks=1 00:25:09.062 00:25:09.062 ' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:09.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.062 --rc genhtml_branch_coverage=1 00:25:09.062 --rc genhtml_function_coverage=1 00:25:09.062 --rc genhtml_legend=1 
00:25:09.062 --rc geninfo_all_blocks=1 00:25:09.062 --rc geninfo_unexecuted_blocks=1 00:25:09.062 00:25:09.062 ' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:09.062 
15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.062 15:58:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:15.628 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:15.628 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:15.628 Found net devices under 0000:af:00.0: cvl_0_0 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.628 15:58:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.628 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:15.629 Found net devices under 0000:af:00.1: cvl_0_1 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.629 15:58:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.629 15:58:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.629 15:58:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:25:15.629 00:25:15.629 --- 10.0.0.2 ping statistics --- 00:25:15.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.629 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:25:15.629 00:25:15.629 --- 10.0.0.1 ping statistics --- 00:25:15.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.629 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2117008 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2117008 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2117008 ']' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 [2024-12-09 15:58:10.239086] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:25:15.629 [2024-12-09 15:58:10.239131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.629 [2024-12-09 15:58:10.318158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.629 [2024-12-09 15:58:10.358569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.629 [2024-12-09 15:58:10.358606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:15.629 [2024-12-09 15:58:10.358614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.629 [2024-12-09 15:58:10.358620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.629 [2024-12-09 15:58:10.358625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.629 [2024-12-09 15:58:10.359140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 [2024-12-09 15:58:10.503483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.629 [2024-12-09 15:58:10.511631] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:15.629 null0 00:25:15.629 [2024-12-09 15:58:10.543628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2117031 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2117031 /tmp/host.sock 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2117031 ']' 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:15.629 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 [2024-12-09 15:58:10.610559] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:25:15.629 [2024-12-09 15:58:10.610598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117031 ] 00:25:15.629 [2024-12-09 15:58:10.682689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.629 [2024-12-09 15:58:10.722037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.629 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.630 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:15.630 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.630 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.888 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.888 15:58:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:15.888 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.888 15:58:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.823 [2024-12-09 15:58:11.920374] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:16.823 [2024-12-09 15:58:11.920392] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:16.823 [2024-12-09 15:58:11.920404] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.823 [2024-12-09 15:58:12.007677] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:17.081 [2024-12-09 15:58:12.069295] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:17.081 [2024-12-09 15:58:12.070022] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1796210:1 started. 
00:25:17.081 [2024-12-09 15:58:12.071331] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:17.081 [2024-12-09 15:58:12.071371] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:17.081 [2024-12-09 15:58:12.071390] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:17.081 [2024-12-09 15:58:12.071401] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:17.081 [2024-12-09 15:58:12.071418] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.081 [2024-12-09 15:58:12.078761] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1796210 was disconnected and freed. delete nvme_qpair. 
00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:17.081 15:58:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.457 15:58:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.392 15:58:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.327 15:58:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.263 15:58:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.263 15:58:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.639 [2024-12-09 15:58:17.512929] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:22.639 
[2024-12-09 15:58:17.512963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.639 [2024-12-09 15:58:17.512973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.639 [2024-12-09 15:58:17.512981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.639 [2024-12-09 15:58:17.512989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.639 [2024-12-09 15:58:17.512998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.639 [2024-12-09 15:58:17.513006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.639 [2024-12-09 15:58:17.513016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.639 [2024-12-09 15:58:17.513024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.639 [2024-12-09 15:58:17.513032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.639 [2024-12-09 15:58:17.513039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.639 [2024-12-09 15:58:17.513047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772a10 is same with the state(6) to be set 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 
!= '' ]] 00:25:22.639 15:58:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.639 [2024-12-09 15:58:17.522952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1772a10 (9): Bad file descriptor 00:25:22.639 [2024-12-09 15:58:17.532985] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.639 [2024-12-09 15:58:17.532995] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.639 [2024-12-09 15:58:17.533001] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.639 [2024-12-09 15:58:17.533006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.639 [2024-12-09 15:58:17.533024] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.576 [2024-12-09 15:58:18.586316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:23.576 [2024-12-09 15:58:18.586396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1772a10 with addr=10.0.0.2, port=4420 00:25:23.576 [2024-12-09 15:58:18.586426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1772a10 is same with the state(6) to be set 00:25:23.576 [2024-12-09 15:58:18.586477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1772a10 (9): Bad file descriptor 00:25:23.576 [2024-12-09 15:58:18.587417] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:23.576 [2024-12-09 15:58:18.587479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:23.576 [2024-12-09 15:58:18.587502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:23.576 [2024-12-09 15:58:18.587525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:23.576 [2024-12-09 15:58:18.587544] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:23.576 [2024-12-09 15:58:18.587559] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:23.576 [2024-12-09 15:58:18.587572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:23.576 [2024-12-09 15:58:18.587592] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:23.576 [2024-12-09 15:58:18.587606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.576 15:58:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.512 [2024-12-09 15:58:19.590119] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.512 [2024-12-09 15:58:19.590138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:24.512 [2024-12-09 15:58:19.590148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.512 [2024-12-09 15:58:19.590154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.512 [2024-12-09 15:58:19.590164] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:24.512 [2024-12-09 15:58:19.590170] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.512 [2024-12-09 15:58:19.590175] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.512 [2024-12-09 15:58:19.590178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.512 [2024-12-09 15:58:19.590195] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:24.512 [2024-12-09 15:58:19.590213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.512 [2024-12-09 15:58:19.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.512 [2024-12-09 15:58:19.590234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.512 [2024-12-09 15:58:19.590240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.512 [2024-12-09 15:58:19.590247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:24.512 [2024-12-09 15:58:19.590254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.512 [2024-12-09 15:58:19.590260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.512 [2024-12-09 15:58:19.590267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.512 [2024-12-09 15:58:19.590274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.512 [2024-12-09 15:58:19.590280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.512 [2024-12-09 15:58:19.590286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:24.512 [2024-12-09 15:58:19.590624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1761d20 (9): Bad file descriptor 00:25:24.512 [2024-12-09 15:58:19.591633] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:24.512 [2024-12-09 15:58:19.591644] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.512 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:24.771 15:58:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:25.707 15:58:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.644 [2024-12-09 15:58:21.645341] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:26.644 [2024-12-09 15:58:21.645359] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:26.644 [2024-12-09 15:58:21.645372] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:26.644 [2024-12-09 15:58:21.731705] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:26.644 [2024-12-09 15:58:21.827334] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:26.644 [2024-12-09 15:58:21.827927] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x179f980:1 started. 00:25:26.644 [2024-12-09 15:58:21.828947] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:26.644 [2024-12-09 15:58:21.828978] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:26.644 [2024-12-09 15:58:21.828996] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:26.644 [2024-12-09 15:58:21.829007] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:26.644 [2024-12-09 15:58:21.829014] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:26.644 [2024-12-09 15:58:21.833808] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x179f980 was disconnected and freed. delete nvme_qpair. 
00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.644 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2117031 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2117031 ']' 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2117031 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117031 
00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117031' 00:25:26.903 killing process with pid 2117031 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2117031 00:25:26.903 15:58:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2117031 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.903 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.903 rmmod nvme_tcp 00:25:26.903 rmmod nvme_fabrics 00:25:27.161 rmmod nvme_keyring 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2117008 ']' 00:25:27.161 
15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2117008 ']' 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117008' 00:25:27.161 killing process with pid 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2117008 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:27.161 15:58:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.161 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.419 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.419 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.419 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.419 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.419 15:58:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.320 00:25:29.320 real 0m20.413s 00:25:29.320 user 0m24.655s 00:25:29.320 sys 0m5.732s 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.320 ************************************ 00:25:29.320 END TEST nvmf_discovery_remove_ifc 00:25:29.320 ************************************ 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.320 ************************************ 
00:25:29.320 START TEST nvmf_identify_kernel_target 00:25:29.320 ************************************ 00:25:29.320 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:29.611 * Looking for test storage... 00:25:29.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.611 15:58:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:29.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.611 --rc genhtml_branch_coverage=1 00:25:29.611 --rc genhtml_function_coverage=1 00:25:29.611 --rc genhtml_legend=1 00:25:29.611 --rc geninfo_all_blocks=1 00:25:29.611 --rc geninfo_unexecuted_blocks=1 00:25:29.611 00:25:29.611 ' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:29.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.611 --rc genhtml_branch_coverage=1 00:25:29.611 --rc genhtml_function_coverage=1 00:25:29.611 --rc genhtml_legend=1 00:25:29.611 --rc geninfo_all_blocks=1 00:25:29.611 --rc geninfo_unexecuted_blocks=1 00:25:29.611 00:25:29.611 ' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:29.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.611 --rc genhtml_branch_coverage=1 00:25:29.611 --rc genhtml_function_coverage=1 00:25:29.611 --rc genhtml_legend=1 00:25:29.611 --rc geninfo_all_blocks=1 00:25:29.611 --rc geninfo_unexecuted_blocks=1 00:25:29.611 00:25:29.611 ' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:29.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.611 --rc genhtml_branch_coverage=1 00:25:29.611 --rc genhtml_function_coverage=1 00:25:29.611 --rc genhtml_legend=1 00:25:29.611 --rc geninfo_all_blocks=1 
00:25:29.611 --rc geninfo_unexecuted_blocks=1 00:25:29.611 00:25:29.611 ' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.611 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.612 15:58:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.309 15:58:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:36.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.309 15:58:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:36.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.309 15:58:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:36.309 Found net devices under 0000:af:00.0: cvl_0_0 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:36.309 Found net devices under 0000:af:00.1: cvl_0_1 
00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.309 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:36.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:36.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:36.310 00:25:36.310 --- 10.0.0.2 ping statistics --- 00:25:36.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.310 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:25:36.310 00:25:36.310 --- 10.0.0.1 ping statistics --- 00:25:36.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.310 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:36.310 
15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:36.310 15:58:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:38.215 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:38.473 Waiting for block devices as requested 00:25:38.473 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:38.732 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:38.732 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:38.732 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:38.732 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:38.991 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:38.991 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:38.991 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:39.249 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:39.249 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:39.249 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:39.508 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:39.508 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:39.508 0000:80:04.3 (8086 2021): 
vfio-pci -> ioatdma 00:25:39.508 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:39.767 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:39.767 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:39.767 15:58:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:40.026 No valid GPT data, bailing 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in 
/sys/block/nvme* 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:40.026 No valid GPT data, bailing 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:40.026 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:25:40.027 15:58:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # continue 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:40.027 15:58:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:40.027 00:25:40.027 Discovery Log Number of Records 2, Generation counter 2 00:25:40.027 =====Discovery Log Entry 0====== 00:25:40.027 trtype: tcp 00:25:40.027 adrfam: ipv4 00:25:40.027 subtype: current discovery subsystem 00:25:40.027 treq: not specified, sq flow control disable supported 00:25:40.027 portid: 1 00:25:40.027 trsvcid: 4420 00:25:40.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:40.027 traddr: 10.0.0.1 00:25:40.027 eflags: none 00:25:40.027 sectype: none 00:25:40.027 =====Discovery Log Entry 1====== 00:25:40.027 trtype: tcp 00:25:40.027 adrfam: ipv4 00:25:40.027 subtype: nvme subsystem 00:25:40.027 treq: not specified, sq flow control disable supported 00:25:40.027 portid: 1 00:25:40.027 trsvcid: 4420 00:25:40.027 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:40.027 traddr: 10.0.0.1 00:25:40.027 eflags: none 00:25:40.027 sectype: none 00:25:40.027 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:40.027 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:40.286 ===================================================== 00:25:40.286 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:40.286 ===================================================== 00:25:40.286 Controller Capabilities/Features 00:25:40.286 ================================ 00:25:40.286 Vendor ID: 0000 00:25:40.286 
Subsystem Vendor ID: 0000 00:25:40.286 Serial Number: 7933a50e225d4a9cc01f 00:25:40.286 Model Number: Linux 00:25:40.286 Firmware Version: 6.8.9-20 00:25:40.286 Recommended Arb Burst: 0 00:25:40.286 IEEE OUI Identifier: 00 00 00 00:25:40.286 Multi-path I/O 00:25:40.286 May have multiple subsystem ports: No 00:25:40.286 May have multiple controllers: No 00:25:40.286 Associated with SR-IOV VF: No 00:25:40.286 Max Data Transfer Size: Unlimited 00:25:40.286 Max Number of Namespaces: 0 00:25:40.287 Max Number of I/O Queues: 1024 00:25:40.287 NVMe Specification Version (VS): 1.3 00:25:40.287 NVMe Specification Version (Identify): 1.3 00:25:40.287 Maximum Queue Entries: 1024 00:25:40.287 Contiguous Queues Required: No 00:25:40.287 Arbitration Mechanisms Supported 00:25:40.287 Weighted Round Robin: Not Supported 00:25:40.287 Vendor Specific: Not Supported 00:25:40.287 Reset Timeout: 7500 ms 00:25:40.287 Doorbell Stride: 4 bytes 00:25:40.287 NVM Subsystem Reset: Not Supported 00:25:40.287 Command Sets Supported 00:25:40.287 NVM Command Set: Supported 00:25:40.287 Boot Partition: Not Supported 00:25:40.287 Memory Page Size Minimum: 4096 bytes 00:25:40.287 Memory Page Size Maximum: 4096 bytes 00:25:40.287 Persistent Memory Region: Not Supported 00:25:40.287 Optional Asynchronous Events Supported 00:25:40.287 Namespace Attribute Notices: Not Supported 00:25:40.287 Firmware Activation Notices: Not Supported 00:25:40.287 ANA Change Notices: Not Supported 00:25:40.287 PLE Aggregate Log Change Notices: Not Supported 00:25:40.287 LBA Status Info Alert Notices: Not Supported 00:25:40.287 EGE Aggregate Log Change Notices: Not Supported 00:25:40.287 Normal NVM Subsystem Shutdown event: Not Supported 00:25:40.287 Zone Descriptor Change Notices: Not Supported 00:25:40.287 Discovery Log Change Notices: Supported 00:25:40.287 Controller Attributes 00:25:40.287 128-bit Host Identifier: Not Supported 00:25:40.287 Non-Operational Permissive Mode: Not Supported 00:25:40.287 NVM Sets: Not 
Supported 00:25:40.287 Read Recovery Levels: Not Supported 00:25:40.287 Endurance Groups: Not Supported 00:25:40.287 Predictable Latency Mode: Not Supported 00:25:40.287 Traffic Based Keep ALive: Not Supported 00:25:40.287 Namespace Granularity: Not Supported 00:25:40.287 SQ Associations: Not Supported 00:25:40.287 UUID List: Not Supported 00:25:40.287 Multi-Domain Subsystem: Not Supported 00:25:40.287 Fixed Capacity Management: Not Supported 00:25:40.287 Variable Capacity Management: Not Supported 00:25:40.287 Delete Endurance Group: Not Supported 00:25:40.287 Delete NVM Set: Not Supported 00:25:40.287 Extended LBA Formats Supported: Not Supported 00:25:40.287 Flexible Data Placement Supported: Not Supported 00:25:40.287 00:25:40.287 Controller Memory Buffer Support 00:25:40.287 ================================ 00:25:40.287 Supported: No 00:25:40.287 00:25:40.287 Persistent Memory Region Support 00:25:40.287 ================================ 00:25:40.287 Supported: No 00:25:40.287 00:25:40.287 Admin Command Set Attributes 00:25:40.287 ============================ 00:25:40.287 Security Send/Receive: Not Supported 00:25:40.287 Format NVM: Not Supported 00:25:40.287 Firmware Activate/Download: Not Supported 00:25:40.287 Namespace Management: Not Supported 00:25:40.287 Device Self-Test: Not Supported 00:25:40.287 Directives: Not Supported 00:25:40.287 NVMe-MI: Not Supported 00:25:40.287 Virtualization Management: Not Supported 00:25:40.287 Doorbell Buffer Config: Not Supported 00:25:40.287 Get LBA Status Capability: Not Supported 00:25:40.287 Command & Feature Lockdown Capability: Not Supported 00:25:40.287 Abort Command Limit: 1 00:25:40.287 Async Event Request Limit: 1 00:25:40.287 Number of Firmware Slots: N/A 00:25:40.287 Firmware Slot 1 Read-Only: N/A 00:25:40.287 Firmware Activation Without Reset: N/A 00:25:40.287 Multiple Update Detection Support: N/A 00:25:40.287 Firmware Update Granularity: No Information Provided 00:25:40.287 Per-Namespace SMART Log: No 
00:25:40.287 Asymmetric Namespace Access Log Page: Not Supported 00:25:40.287 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:40.287 Command Effects Log Page: Not Supported 00:25:40.287 Get Log Page Extended Data: Supported 00:25:40.287 Telemetry Log Pages: Not Supported 00:25:40.287 Persistent Event Log Pages: Not Supported 00:25:40.287 Supported Log Pages Log Page: May Support 00:25:40.287 Commands Supported & Effects Log Page: Not Supported 00:25:40.287 Feature Identifiers & Effects Log Page:May Support 00:25:40.287 NVMe-MI Commands & Effects Log Page: May Support 00:25:40.287 Data Area 4 for Telemetry Log: Not Supported 00:25:40.287 Error Log Page Entries Supported: 1 00:25:40.287 Keep Alive: Not Supported 00:25:40.287 00:25:40.287 NVM Command Set Attributes 00:25:40.287 ========================== 00:25:40.287 Submission Queue Entry Size 00:25:40.287 Max: 1 00:25:40.287 Min: 1 00:25:40.287 Completion Queue Entry Size 00:25:40.287 Max: 1 00:25:40.287 Min: 1 00:25:40.287 Number of Namespaces: 0 00:25:40.287 Compare Command: Not Supported 00:25:40.287 Write Uncorrectable Command: Not Supported 00:25:40.287 Dataset Management Command: Not Supported 00:25:40.287 Write Zeroes Command: Not Supported 00:25:40.287 Set Features Save Field: Not Supported 00:25:40.287 Reservations: Not Supported 00:25:40.287 Timestamp: Not Supported 00:25:40.287 Copy: Not Supported 00:25:40.287 Volatile Write Cache: Not Present 00:25:40.287 Atomic Write Unit (Normal): 1 00:25:40.287 Atomic Write Unit (PFail): 1 00:25:40.287 Atomic Compare & Write Unit: 1 00:25:40.287 Fused Compare & Write: Not Supported 00:25:40.287 Scatter-Gather List 00:25:40.287 SGL Command Set: Supported 00:25:40.287 SGL Keyed: Not Supported 00:25:40.287 SGL Bit Bucket Descriptor: Not Supported 00:25:40.287 SGL Metadata Pointer: Not Supported 00:25:40.287 Oversized SGL: Not Supported 00:25:40.287 SGL Metadata Address: Not Supported 00:25:40.287 SGL Offset: Supported 00:25:40.287 Transport SGL Data Block: Not 
Supported 00:25:40.287 Replay Protected Memory Block: Not Supported 00:25:40.287 00:25:40.287 Firmware Slot Information 00:25:40.287 ========================= 00:25:40.287 Active slot: 0 00:25:40.287 00:25:40.287 00:25:40.287 Error Log 00:25:40.287 ========= 00:25:40.287 00:25:40.287 Active Namespaces 00:25:40.287 ================= 00:25:40.287 Discovery Log Page 00:25:40.287 ================== 00:25:40.287 Generation Counter: 2 00:25:40.287 Number of Records: 2 00:25:40.287 Record Format: 0 00:25:40.287 00:25:40.287 Discovery Log Entry 0 00:25:40.287 ---------------------- 00:25:40.287 Transport Type: 3 (TCP) 00:25:40.287 Address Family: 1 (IPv4) 00:25:40.287 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:40.287 Entry Flags: 00:25:40.287 Duplicate Returned Information: 0 00:25:40.287 Explicit Persistent Connection Support for Discovery: 0 00:25:40.287 Transport Requirements: 00:25:40.287 Secure Channel: Not Specified 00:25:40.287 Port ID: 1 (0x0001) 00:25:40.287 Controller ID: 65535 (0xffff) 00:25:40.287 Admin Max SQ Size: 32 00:25:40.287 Transport Service Identifier: 4420 00:25:40.287 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:40.287 Transport Address: 10.0.0.1 00:25:40.287 Discovery Log Entry 1 00:25:40.287 ---------------------- 00:25:40.287 Transport Type: 3 (TCP) 00:25:40.287 Address Family: 1 (IPv4) 00:25:40.287 Subsystem Type: 2 (NVM Subsystem) 00:25:40.287 Entry Flags: 00:25:40.287 Duplicate Returned Information: 0 00:25:40.287 Explicit Persistent Connection Support for Discovery: 0 00:25:40.287 Transport Requirements: 00:25:40.287 Secure Channel: Not Specified 00:25:40.287 Port ID: 1 (0x0001) 00:25:40.287 Controller ID: 65535 (0xffff) 00:25:40.287 Admin Max SQ Size: 32 00:25:40.287 Transport Service Identifier: 4420 00:25:40.287 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:40.287 Transport Address: 10.0.0.1 00:25:40.287 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:40.287 get_feature(0x01) failed 00:25:40.287 get_feature(0x02) failed 00:25:40.287 get_feature(0x04) failed 00:25:40.287 ===================================================== 00:25:40.287 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:40.287 ===================================================== 00:25:40.287 Controller Capabilities/Features 00:25:40.287 ================================ 00:25:40.287 Vendor ID: 0000 00:25:40.287 Subsystem Vendor ID: 0000 00:25:40.287 Serial Number: f59673f285fec6ddc5c0 00:25:40.287 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:40.287 Firmware Version: 6.8.9-20 00:25:40.287 Recommended Arb Burst: 6 00:25:40.287 IEEE OUI Identifier: 00 00 00 00:25:40.287 Multi-path I/O 00:25:40.287 May have multiple subsystem ports: Yes 00:25:40.287 May have multiple controllers: Yes 00:25:40.287 Associated with SR-IOV VF: No 00:25:40.287 Max Data Transfer Size: Unlimited 00:25:40.287 Max Number of Namespaces: 1024 00:25:40.287 Max Number of I/O Queues: 128 00:25:40.287 NVMe Specification Version (VS): 1.3 00:25:40.287 NVMe Specification Version (Identify): 1.3 00:25:40.287 Maximum Queue Entries: 1024 00:25:40.287 Contiguous Queues Required: No 00:25:40.287 Arbitration Mechanisms Supported 00:25:40.287 Weighted Round Robin: Not Supported 00:25:40.287 Vendor Specific: Not Supported 00:25:40.287 Reset Timeout: 7500 ms 00:25:40.288 Doorbell Stride: 4 bytes 00:25:40.288 NVM Subsystem Reset: Not Supported 00:25:40.288 Command Sets Supported 00:25:40.288 NVM Command Set: Supported 00:25:40.288 Boot Partition: Not Supported 00:25:40.288 Memory Page Size Minimum: 4096 bytes 00:25:40.288 Memory Page Size Maximum: 4096 bytes 00:25:40.288 Persistent Memory Region: Not Supported 00:25:40.288 Optional Asynchronous 
Events Supported 00:25:40.288 Namespace Attribute Notices: Supported 00:25:40.288 Firmware Activation Notices: Not Supported 00:25:40.288 ANA Change Notices: Supported 00:25:40.288 PLE Aggregate Log Change Notices: Not Supported 00:25:40.288 LBA Status Info Alert Notices: Not Supported 00:25:40.288 EGE Aggregate Log Change Notices: Not Supported 00:25:40.288 Normal NVM Subsystem Shutdown event: Not Supported 00:25:40.288 Zone Descriptor Change Notices: Not Supported 00:25:40.288 Discovery Log Change Notices: Not Supported 00:25:40.288 Controller Attributes 00:25:40.288 128-bit Host Identifier: Supported 00:25:40.288 Non-Operational Permissive Mode: Not Supported 00:25:40.288 NVM Sets: Not Supported 00:25:40.288 Read Recovery Levels: Not Supported 00:25:40.288 Endurance Groups: Not Supported 00:25:40.288 Predictable Latency Mode: Not Supported 00:25:40.288 Traffic Based Keep ALive: Supported 00:25:40.288 Namespace Granularity: Not Supported 00:25:40.288 SQ Associations: Not Supported 00:25:40.288 UUID List: Not Supported 00:25:40.288 Multi-Domain Subsystem: Not Supported 00:25:40.288 Fixed Capacity Management: Not Supported 00:25:40.288 Variable Capacity Management: Not Supported 00:25:40.288 Delete Endurance Group: Not Supported 00:25:40.288 Delete NVM Set: Not Supported 00:25:40.288 Extended LBA Formats Supported: Not Supported 00:25:40.288 Flexible Data Placement Supported: Not Supported 00:25:40.288 00:25:40.288 Controller Memory Buffer Support 00:25:40.288 ================================ 00:25:40.288 Supported: No 00:25:40.288 00:25:40.288 Persistent Memory Region Support 00:25:40.288 ================================ 00:25:40.288 Supported: No 00:25:40.288 00:25:40.288 Admin Command Set Attributes 00:25:40.288 ============================ 00:25:40.288 Security Send/Receive: Not Supported 00:25:40.288 Format NVM: Not Supported 00:25:40.288 Firmware Activate/Download: Not Supported 00:25:40.288 Namespace Management: Not Supported 00:25:40.288 Device Self-Test: 
Not Supported 00:25:40.288 Directives: Not Supported 00:25:40.288 NVMe-MI: Not Supported 00:25:40.288 Virtualization Management: Not Supported 00:25:40.288 Doorbell Buffer Config: Not Supported 00:25:40.288 Get LBA Status Capability: Not Supported 00:25:40.288 Command & Feature Lockdown Capability: Not Supported 00:25:40.288 Abort Command Limit: 4 00:25:40.288 Async Event Request Limit: 4 00:25:40.288 Number of Firmware Slots: N/A 00:25:40.288 Firmware Slot 1 Read-Only: N/A 00:25:40.288 Firmware Activation Without Reset: N/A 00:25:40.288 Multiple Update Detection Support: N/A 00:25:40.288 Firmware Update Granularity: No Information Provided 00:25:40.288 Per-Namespace SMART Log: Yes 00:25:40.288 Asymmetric Namespace Access Log Page: Supported 00:25:40.288 ANA Transition Time : 10 sec 00:25:40.288 00:25:40.288 Asymmetric Namespace Access Capabilities 00:25:40.288 ANA Optimized State : Supported 00:25:40.288 ANA Non-Optimized State : Supported 00:25:40.288 ANA Inaccessible State : Supported 00:25:40.288 ANA Persistent Loss State : Supported 00:25:40.288 ANA Change State : Supported 00:25:40.288 ANAGRPID is not changed : No 00:25:40.288 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:40.288 00:25:40.288 ANA Group Identifier Maximum : 128 00:25:40.288 Number of ANA Group Identifiers : 128 00:25:40.288 Max Number of Allowed Namespaces : 1024 00:25:40.288 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:40.288 Command Effects Log Page: Supported 00:25:40.288 Get Log Page Extended Data: Supported 00:25:40.288 Telemetry Log Pages: Not Supported 00:25:40.288 Persistent Event Log Pages: Not Supported 00:25:40.288 Supported Log Pages Log Page: May Support 00:25:40.288 Commands Supported & Effects Log Page: Not Supported 00:25:40.288 Feature Identifiers & Effects Log Page:May Support 00:25:40.288 NVMe-MI Commands & Effects Log Page: May Support 00:25:40.288 Data Area 4 for Telemetry Log: Not Supported 00:25:40.288 Error Log Page Entries Supported: 128 00:25:40.288 Keep 
Alive: Supported 00:25:40.288 Keep Alive Granularity: 1000 ms 00:25:40.288 00:25:40.288 NVM Command Set Attributes 00:25:40.288 ========================== 00:25:40.288 Submission Queue Entry Size 00:25:40.288 Max: 64 00:25:40.288 Min: 64 00:25:40.288 Completion Queue Entry Size 00:25:40.288 Max: 16 00:25:40.288 Min: 16 00:25:40.288 Number of Namespaces: 1024 00:25:40.288 Compare Command: Not Supported 00:25:40.288 Write Uncorrectable Command: Not Supported 00:25:40.288 Dataset Management Command: Supported 00:25:40.288 Write Zeroes Command: Supported 00:25:40.288 Set Features Save Field: Not Supported 00:25:40.288 Reservations: Not Supported 00:25:40.288 Timestamp: Not Supported 00:25:40.288 Copy: Not Supported 00:25:40.288 Volatile Write Cache: Present 00:25:40.288 Atomic Write Unit (Normal): 1 00:25:40.288 Atomic Write Unit (PFail): 1 00:25:40.288 Atomic Compare & Write Unit: 1 00:25:40.288 Fused Compare & Write: Not Supported 00:25:40.288 Scatter-Gather List 00:25:40.288 SGL Command Set: Supported 00:25:40.288 SGL Keyed: Not Supported 00:25:40.288 SGL Bit Bucket Descriptor: Not Supported 00:25:40.288 SGL Metadata Pointer: Not Supported 00:25:40.288 Oversized SGL: Not Supported 00:25:40.288 SGL Metadata Address: Not Supported 00:25:40.288 SGL Offset: Supported 00:25:40.288 Transport SGL Data Block: Not Supported 00:25:40.288 Replay Protected Memory Block: Not Supported 00:25:40.288 00:25:40.288 Firmware Slot Information 00:25:40.288 ========================= 00:25:40.288 Active slot: 0 00:25:40.288 00:25:40.288 Asymmetric Namespace Access 00:25:40.288 =========================== 00:25:40.288 Change Count : 0 00:25:40.288 Number of ANA Group Descriptors : 1 00:25:40.288 ANA Group Descriptor : 0 00:25:40.288 ANA Group ID : 1 00:25:40.288 Number of NSID Values : 1 00:25:40.288 Change Count : 0 00:25:40.288 ANA State : 1 00:25:40.288 Namespace Identifier : 1 00:25:40.288 00:25:40.288 Commands Supported and Effects 00:25:40.288 ============================== 
00:25:40.288 Admin Commands 00:25:40.288 -------------- 00:25:40.288 Get Log Page (02h): Supported 00:25:40.288 Identify (06h): Supported 00:25:40.288 Abort (08h): Supported 00:25:40.288 Set Features (09h): Supported 00:25:40.288 Get Features (0Ah): Supported 00:25:40.288 Asynchronous Event Request (0Ch): Supported 00:25:40.288 Keep Alive (18h): Supported 00:25:40.288 I/O Commands 00:25:40.288 ------------ 00:25:40.288 Flush (00h): Supported 00:25:40.288 Write (01h): Supported LBA-Change 00:25:40.288 Read (02h): Supported 00:25:40.288 Write Zeroes (08h): Supported LBA-Change 00:25:40.288 Dataset Management (09h): Supported 00:25:40.288 00:25:40.288 Error Log 00:25:40.288 ========= 00:25:40.288 Entry: 0 00:25:40.288 Error Count: 0x3 00:25:40.288 Submission Queue Id: 0x0 00:25:40.288 Command Id: 0x5 00:25:40.288 Phase Bit: 0 00:25:40.288 Status Code: 0x2 00:25:40.288 Status Code Type: 0x0 00:25:40.288 Do Not Retry: 1 00:25:40.288 Error Location: 0x28 00:25:40.288 LBA: 0x0 00:25:40.288 Namespace: 0x0 00:25:40.288 Vendor Log Page: 0x0 00:25:40.288 ----------- 00:25:40.288 Entry: 1 00:25:40.288 Error Count: 0x2 00:25:40.288 Submission Queue Id: 0x0 00:25:40.288 Command Id: 0x5 00:25:40.288 Phase Bit: 0 00:25:40.288 Status Code: 0x2 00:25:40.288 Status Code Type: 0x0 00:25:40.288 Do Not Retry: 1 00:25:40.288 Error Location: 0x28 00:25:40.288 LBA: 0x0 00:25:40.288 Namespace: 0x0 00:25:40.288 Vendor Log Page: 0x0 00:25:40.288 ----------- 00:25:40.288 Entry: 2 00:25:40.288 Error Count: 0x1 00:25:40.288 Submission Queue Id: 0x0 00:25:40.288 Command Id: 0x4 00:25:40.288 Phase Bit: 0 00:25:40.288 Status Code: 0x2 00:25:40.288 Status Code Type: 0x0 00:25:40.288 Do Not Retry: 1 00:25:40.288 Error Location: 0x28 00:25:40.288 LBA: 0x0 00:25:40.288 Namespace: 0x0 00:25:40.288 Vendor Log Page: 0x0 00:25:40.288 00:25:40.288 Number of Queues 00:25:40.288 ================ 00:25:40.288 Number of I/O Submission Queues: 128 00:25:40.288 Number of I/O Completion Queues: 128 00:25:40.288 
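The three error log entries above all report Status Code Type 0x0 with Status Code 0x2, which the NVMe base specification defines as Generic Command Status / Invalid Field in Command — consistent with the earlier `get_feature(...) failed` lines. A minimal decoder sketch (the table deliberately covers only the generic codes relevant to this log, not the full specification):

```python
# Minimal NVMe completion-status decoder; covers only Status Code Type 0x0
# (Generic Command Status) values relevant to the error log entries above.
GENERIC_STATUS = {
    0x0: "Successful Completion",
    0x1: "Invalid Command Opcode",
    0x2: "Invalid Field in Command",
}

def decode_status(sct: int, sc: int) -> str:
    """Translate (Status Code Type, Status Code) into a human-readable string."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"Unknown generic status 0x{sc:x}")
    return f"Unknown status code type 0x{sct:x}"

# The status carried by all three entries in the error log above:
print(decode_status(0x0, 0x2))
```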
00:25:40.288 ZNS Specific Controller Data 00:25:40.288 ============================ 00:25:40.288 Zone Append Size Limit: 0 00:25:40.288 00:25:40.288 00:25:40.288 Active Namespaces 00:25:40.288 ================= 00:25:40.288 get_feature(0x05) failed 00:25:40.288 Namespace ID:1 00:25:40.288 Command Set Identifier: NVM (00h) 00:25:40.288 Deallocate: Supported 00:25:40.288 Deallocated/Unwritten Error: Not Supported 00:25:40.289 Deallocated Read Value: Unknown 00:25:40.289 Deallocate in Write Zeroes: Not Supported 00:25:40.289 Deallocated Guard Field: 0xFFFF 00:25:40.289 Flush: Supported 00:25:40.289 Reservation: Not Supported 00:25:40.289 Namespace Sharing Capabilities: Multiple Controllers 00:25:40.289 Size (in LBAs): 4194304 (2GiB) 00:25:40.289 Capacity (in LBAs): 4194304 (2GiB) 00:25:40.289 Utilization (in LBAs): 4194304 (2GiB) 00:25:40.289 UUID: 5afaf48f-7894-450c-b8b8-44d5133c5d30 00:25:40.289 Thin Provisioning: Not Supported 00:25:40.289 Per-NS Atomic Units: Yes 00:25:40.289 Atomic Boundary Size (Normal): 0 00:25:40.289 Atomic Boundary Size (PFail): 0 00:25:40.289 Atomic Boundary Offset: 0 00:25:40.289 NGUID/EUI64 Never Reused: No 00:25:40.289 ANA group ID: 1 00:25:40.289 Namespace Write Protected: No 00:25:40.289 Number of LBA Formats: 1 00:25:40.289 Current LBA Format: LBA Format #00 00:25:40.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:40.289 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
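The active namespace reported above has 4194304 LBAs at a 512-byte LBA format (LBA Format #00), which is where the 2 GiB Size/Capacity/Utilization figures come from. A quick check of that arithmetic:

```python
# Namespace geometry from the Active Namespaces section of the identify output.
lba_count = 4194304   # Size (in LBAs)
lba_size = 512        # Data Size of LBA Format #00, in bytes

capacity_bytes = lba_count * lba_size
print(capacity_bytes // 2**30, "GiB")  # matches the 2GiB reported by spdk_nvme_identify
```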
-- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.289 rmmod nvme_tcp 00:25:40.289 rmmod nvme_fabrics 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.289 15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.289 
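The `nvmftestfini` teardown above restores the firewall by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, i.e. it keeps every saved rule except those tagged by SPDK. A small Python sketch of that filtering step (the sample rule text below is illustrative, not taken from this run):

```python
def strip_spdk_rules(saved_rules: str) -> str:
    """Drop SPDK-tagged rules, keep everything else (mirrors `grep -v SPDK_NVMF`)."""
    kept = [line for line in saved_rules.splitlines()
            if "SPDK_NVMF" not in line]
    return "\n".join(kept)

# Hypothetical iptables-save output containing one SPDK-tagged rule:
rules = """*filter
-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT"""

print(strip_spdk_rules(rules))
```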
15:58:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:42.822 15:58:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:45.358 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:45.358 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.4 (8086 
2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:45.358 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:45.618 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:46.554 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:46.554 00:25:46.554 real 0m17.107s 00:25:46.554 user 0m4.582s 00:25:46.554 sys 0m8.869s 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.554 ************************************ 00:25:46.554 END TEST nvmf_identify_kernel_target 00:25:46.554 ************************************ 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.554 ************************************ 00:25:46.554 START TEST nvmf_auth_host 00:25:46.554 ************************************ 00:25:46.554 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:46.814 * Looking for test storage... 00:25:46.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:46.814 15:58:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:46.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.814 --rc genhtml_branch_coverage=1 00:25:46.814 --rc genhtml_function_coverage=1 00:25:46.814 --rc genhtml_legend=1 00:25:46.814 --rc 
geninfo_all_blocks=1 00:25:46.814 --rc geninfo_unexecuted_blocks=1 00:25:46.814 00:25:46.814 ' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:46.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.814 --rc genhtml_branch_coverage=1 00:25:46.814 --rc genhtml_function_coverage=1 00:25:46.814 --rc genhtml_legend=1 00:25:46.814 --rc geninfo_all_blocks=1 00:25:46.814 --rc geninfo_unexecuted_blocks=1 00:25:46.814 00:25:46.814 ' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:46.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.814 --rc genhtml_branch_coverage=1 00:25:46.814 --rc genhtml_function_coverage=1 00:25:46.814 --rc genhtml_legend=1 00:25:46.814 --rc geninfo_all_blocks=1 00:25:46.814 --rc geninfo_unexecuted_blocks=1 00:25:46.814 00:25:46.814 ' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:46.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.814 --rc genhtml_branch_coverage=1 00:25:46.814 --rc genhtml_function_coverage=1 00:25:46.814 --rc genhtml_legend=1 00:25:46.814 --rc geninfo_all_blocks=1 00:25:46.814 --rc geninfo_unexecuted_blocks=1 00:25:46.814 00:25:46.814 ' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # 
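The `cmp_versions` trace above (from `scripts/common.sh`) splits two dotted version strings into component arrays and compares them field by field to decide whether the installed lcov (1.15) predates version 2. A minimal Python equivalent of that comparison, under the assumption that missing components count as zero as in the shell loop:

```python
def cmp_versions(v1: str, v2: str) -> int:
    """Compare dotted version strings component-wise; returns -1, 0, or 1."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    # Pad the shorter version with zeros, as the shell loop does implicitly.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return (a > b) - (a < b)

print(cmp_versions("1.15", "2"))  # lcov 1.15 sorts before 2, as the trace concludes
```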
NVMF_THIRD_PORT=4422 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.814 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.815 15:58:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.381 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:53.382 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:53.382 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.382 
15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:53.382 Found net devices under 0000:af:00.0: cvl_0_0 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:53.382 Found net devices under 0000:af:00.1: cvl_0_1 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.382 15:58:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 
-- # ping -c 1 10.0.0.2 00:25:53.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:25:53.382 00:25:53.382 --- 10.0.0.2 ping statistics --- 00:25:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.382 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:25:53.382 00:25:53.382 --- 10.0.0.1 ping statistics --- 00:25:53.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.382 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
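The namespace setup in the trace above ends by opening TCP port 4420 through the suite's `ipts` wrapper and then verifying reachability in both directions with `ping`. As the expanded command at nvmf/common.sh@790 shows, `ipts` mirrors its whole argument list into an `SPDK_NVMF:` comment on the rule, so a later cleanup pass can find and delete exactly the rules this run inserted. A minimal sketch of that tagging behavior (echoing instead of invoking `iptables`, since the real call needs root; this is not the suite's actual helper):

```shell
# Sketch of the "ipts" wrapper seen in the log: every argument list is
# mirrored into an SPDK_NVMF comment so cleanup can match these rules later.
ipts() {
  # Echo rather than run iptables, so the sketch works without root.
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

This reproduces the shape of the expansion recorded in the log: the rule arguments appear twice, once as the rule itself and once inside the comment.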
00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2128947 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2128947 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2128947 ']' 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
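The `waitforlisten 2128947` step above blocks until the freshly launched `nvmf_tgt` (pid 2128947) is up and accepting RPCs on `/var/tmp/spdk.sock`, printing the "Waiting for process to start up and listen on UNIX domain socket" message seen in the trace. Reduced to its essence, this is a bounded poll for the socket path; a simplified sketch (a temp file stands in for the RPC socket so the sketch runs without an SPDK target, and the real helper additionally checks that the pid is still alive):

```shell
# Simplified sketch of the waitforlisten idea: poll until the target's RPC
# socket path appears, with a bounded number of retries.
sock=$(mktemp -u)                 # stand-in path that does not exist yet
( sleep 0.2; : > "$sock" ) &      # simulate nvmf_tgt creating its socket
status=timeout
for _ in $(seq 1 50); do
  if [ -e "$sock" ]; then status=listening; break; fi
  sleep 0.1
done
echo "$status"
rm -f "$sock"
```

The bounded retry loop is what turns a crashed target into a quick, diagnosable test failure instead of a hung pipeline stage.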
00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.382 15:58:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.641 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c91d3eaeb5a1b0ef0ee45c9102e6125 00:25:53.642 15:58:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xj2 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c91d3eaeb5a1b0ef0ee45c9102e6125 0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c91d3eaeb5a1b0ef0ee45c9102e6125 0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c91d3eaeb5a1b0ef0ee45c9102e6125 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xj2 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xj2 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Xj2 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:53.642 15:58:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83c42f88d3bebb97962db0154c87b3059d8dc041099f9aa19d46217c84c9c0fb 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SIn 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83c42f88d3bebb97962db0154c87b3059d8dc041099f9aa19d46217c84c9c0fb 3 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83c42f88d3bebb97962db0154c87b3059d8dc041099f9aa19d46217c84c9c0fb 3 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83c42f88d3bebb97962db0154c87b3059d8dc041099f9aa19d46217c84c9c0fb 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SIn 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SIn 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SIn 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cad11028b890ec4fdbd6b7b18dbdfa90f2ea4bc95d2fad47 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.diK 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cad11028b890ec4fdbd6b7b18dbdfa90f2ea4bc95d2fad47 0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cad11028b890ec4fdbd6b7b18dbdfa90f2ea4bc95d2fad47 0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cad11028b890ec4fdbd6b7b18dbdfa90f2ea4bc95d2fad47 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:53.642 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.diK 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.diK 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.diK 
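Each `gen_dhchap_key <digest> <len>` call traced above follows the same pattern: read `len/2` random bytes with `xxd` to obtain a `len`-character hex secret, `mktemp` a key file named after the digest, format the secret as a DHHC-1 string via an inline `python -`, and `chmod 0600` the result. A stripped-down sketch of just the secret-generation step (the DHHC-1 formatting is omitted, and the `spdk.key-sketch` temp-file prefix is illustrative, not the suite's):

```shell
# Sketch of the secret-generation step from gen_dhchap_key: len hex chars
# come from len/2 bytes of /dev/urandom, stored mode 0600 in a mktemp file.
gen_secret() {
  local len=$1 key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. len=32 -> 16 bytes
  file=$(mktemp -t spdk.key-sketch.XXX)            # hypothetical name pattern
  printf '%s\n' "$key" > "$file"
  chmod 0600 "$file"                               # keys must not be world-readable
  echo "$key"
}

gen_secret 32    # prints a 32-character hex secret
```

This matches the lengths visible in the trace: `gen_dhchap_key null 32` reads 16 bytes, `sha384 48` reads 24, and `sha512 64` reads 32.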
00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bfdc32366b70c564d95edd7fa72045bfa4c5ed94d1511b93 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lRM 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bfdc32366b70c564d95edd7fa72045bfa4c5ed94d1511b93 2 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bfdc32366b70c564d95edd7fa72045bfa4c5ed94d1511b93 2 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.901 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bfdc32366b70c564d95edd7fa72045bfa4c5ed94d1511b93 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.902 15:58:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lRM 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lRM 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lRM 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=69aef40087967630f4594198700aaced 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DJk 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 69aef40087967630f4594198700aaced 1 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 69aef40087967630f4594198700aaced 1 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=69aef40087967630f4594198700aaced 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:53.902 15:58:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DJk 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DJk 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.DJk 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28552445e22c5baab47f710f4f13a1ee 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.H0T 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28552445e22c5baab47f710f4f13a1ee 1 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28552445e22c5baab47f710f4f13a1ee 1 00:25:53.902 15:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28552445e22c5baab47f710f4f13a1ee 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.H0T 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.H0T 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.H0T 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2b0cb633c7085aac42a15eb09b81c01e1d5fc107d9c33cd 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jos 00:25:53.902 15:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2b0cb633c7085aac42a15eb09b81c01e1d5fc107d9c33cd 2 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2b0cb633c7085aac42a15eb09b81c01e1d5fc107d9c33cd 2 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2b0cb633c7085aac42a15eb09b81c01e1d5fc107d9c33cd 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:53.902 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jos 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jos 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.jos 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # 
key=a0451aa626fc252790ce0172188db6a9 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aHn 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a0451aa626fc252790ce0172188db6a9 0 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a0451aa626fc252790ce0172188db6a9 0 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a0451aa626fc252790ce0172188db6a9 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aHn 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aHn 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aHn 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # len=64 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4ca19a5f52256030a75f448a09e8e038313f6dd457ff09cbbbba17b0eb55817 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0Qr 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4ca19a5f52256030a75f448a09e8e038313f6dd457ff09cbbbba17b0eb55817 3 00:25:54.161 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4ca19a5f52256030a75f448a09e8e038313f6dd457ff09cbbbba17b0eb55817 3 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4ca19a5f52256030a75f448a09e8e038313f6dd457ff09cbbbba17b0eb55817 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0Qr 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0Qr 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0Qr 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2128947 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@835 -- # '[' -z 2128947 ']' 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.162 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xj2 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SIn ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SIn 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.diK 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lRM ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lRM 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.DJk 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.H0T ]] 00:25:54.421 15:58:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.H0T 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.jos 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aHn ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aHn 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0Qr 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:54.421 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:54.422 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:54.422 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:54.422 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:54.422 15:58:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:56.955 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:25:57.214 Waiting for block devices as requested 00:25:57.214 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:57.472 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.472 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:57.472 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:57.731 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:57.731 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:57.731 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:57.731 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:57.990 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:57.990 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.990 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.249 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:58.249 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:58.507 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.507 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:59.075 No valid GPT data, bailing 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:59.075 15:58:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:59.075 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:25:59.334 No valid GPT data, bailing 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # continue 00:25:59.334 15:58:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:59.334 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:59.334 00:25:59.334 Discovery Log Number of Records 2, Generation counter 2 00:25:59.334 =====Discovery Log Entry 0====== 00:25:59.334 trtype: tcp 00:25:59.334 adrfam: ipv4 00:25:59.334 subtype: current discovery subsystem 00:25:59.334 treq: not 
specified, sq flow control disable supported 00:25:59.334 portid: 1 00:25:59.334 trsvcid: 4420 00:25:59.334 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:59.334 traddr: 10.0.0.1 00:25:59.334 eflags: none 00:25:59.334 sectype: none 00:25:59.334 =====Discovery Log Entry 1====== 00:25:59.334 trtype: tcp 00:25:59.334 adrfam: ipv4 00:25:59.334 subtype: nvme subsystem 00:25:59.334 treq: not specified, sq flow control disable supported 00:25:59.334 portid: 1 00:25:59.334 trsvcid: 4420 00:25:59.334 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:59.334 traddr: 10.0.0.1 00:25:59.334 eflags: none 00:25:59.334 sectype: none 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:25:59.335 15:58:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.335 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:59.594 nvme0n1 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.594 15:58:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.594 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.853 nvme0n1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.853 15:58:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 nvme0n1 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 nvme0n1 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.111 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:00.369 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 nvme0n1 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.370 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.629 nvme0n1 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.629 15:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.629 15:58:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.629 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.630 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.630 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.630 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.630 15:58:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.888 nvme0n1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:00.888 15:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.888 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.889 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.147 nvme0n1 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.147 
15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.147 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.148 15:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.148 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.407 nvme0n1 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.407 15:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.407 15:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.407 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.666 nvme0n1 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:01.666 15:58:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.666 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.925 nvme0n1 00:26:01.925 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.925 
15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.925 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.925 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.925 15:58:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.925 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.926 
15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.926 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 nvme0n1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.185 15:58:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.185 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.444 nvme0n1 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.444 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.703 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.962 nvme0n1 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.962 15:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.962 15:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.962 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.221 nvme0n1 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.221 15:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.221 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.222 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.480 nvme0n1 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.480 
15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.480 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.481 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.740 15:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.740 15:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.999 nvme0n1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.999 15:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.999 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.000 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.635 nvme0n1 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:04.635 
15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.635 15:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.635 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.895 nvme0n1 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.895 15:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.895 15:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.895 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.464 nvme0n1 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.464 15:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.464 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.465 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.724 nvme0n1 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.724 
15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:26:05.724 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.725 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.985 15:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.985 15:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.553 nvme0n1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.553 15:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:06.553 15:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.553 15:59:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.553 15:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.120 nvme0n1 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.120 15:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:07.120 15:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.120 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.688 nvme0n1 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe8192 3 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.688 15:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.688 15:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.255 nvme0n1 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.255 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.514 
15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.514 15:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.082 nvme0n1 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.082 15:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.082 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.083 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.083 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 nvme0n1 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:09.341 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.342 nvme0n1 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.342 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.601 15:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 nvme0n1 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:09.601 15:59:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.601 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.602 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.861 nvme0n1 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.861 15:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.861 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.121 nvme0n1 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.121 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.121 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.121 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 nvme0n1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.380 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.380 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.380 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 nvme0n1 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.639 15:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.639 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.898 nvme0n1 00:26:10.898 15:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.898 15:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.898 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:11.157 nvme0n1 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.157 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.158 
15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.158 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.417 nvme0n1 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.417 15:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.417 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.676 nvme0n1 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.676 15:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.676 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.934 15:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.934 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.935 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.935 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.935 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.935 15:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.935 nvme0n1 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.935 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.193 15:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.193 15:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.193 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 nvme0n1 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.452 15:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.452 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:12.711 nvme0n1 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.711 
15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.711 15:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 nvme0n1 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.970 15:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.970 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.538 nvme0n1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.538 15:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.538 15:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.538 15:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.796 nvme0n1 00:26:13.796 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.796 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.796 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.796 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.796 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.055 15:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.055 15:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.055 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.314 nvme0n1 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.314 15:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.314 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:14.881 nvme0n1 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.881 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.882 
15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.882 15:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.140 nvme0n1 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.140 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.399 15:59:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.399 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.966 nvme0n1 00:26:15.966 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.967 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.967 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.967 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.967 15:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.967 15:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.967 15:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.967 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 nvme0n1 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 15:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.534 15:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.534 15:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.101 nvme0n1 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.101 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.360 15:59:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.360 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:17.928 nvme0n1 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.928 
15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.928 15:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.496 nvme0n1 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.496 15:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.496 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.755 nvme0n1 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:18.755 15:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:18.755 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.756 15:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.014 nvme0n1 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
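The trace above repeats one fixed sequence per (digest, dhgroup, keyid) combination: `bdev_nvme_set_options` restricts the allowed DH-CHAP digest and FFDHE group, `bdev_nvme_attach_controller` connects with `--dhchap-key keyN` (adding `--dhchap-ctrlr-key ckeyN` only when a controller key exists, which is why keyid 4 with its empty ckey omits it), `bdev_nvme_get_controllers` confirms `nvme0` came up, and `bdev_nvme_detach_controller` tears it down. A dry-run sketch of that loop; the rpc.py invocation, address 10.0.0.1:4420, the NQNs, and the key names are assumptions read off the log, and commands are echoed rather than executed:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-combination DH-CHAP connect test seen in the log.
# Assumptions: keys key0..key4 (controller keys ckey0..ckey3) are already
# registered with the target; swap 'echo' for the real rpc.py to run live.
rpc() { echo "rpc.py $*"; }

digest=sha512
dhgroup=ffdhe2048
addr=10.0.0.1 port=4420
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for keyid in 0 1 2 3 4; do
    # keyid 4 has no controller key in the log, so the host authenticates
    # one-way; mirror the ${ckeys[keyid]:+...} pattern with an array.
    ckey_arg=()
    [ "$keyid" -lt 4 ] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")

    rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$addr" -s "$port" \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey_arg[@]}"
    rpc bdev_nvme_get_controllers
    rpc bdev_nvme_detach_controller nvme0
done
```

The empty-array trick keeps the attach command identical for all key IDs while dropping the controller-key flag cleanly when it does not apply, matching the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line in auth.sh.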
00:26:19.014 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.015 
15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.015 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.274 nvme0n1 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.274 15:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
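The `get_main_ns_ip` steps visible in every round (`ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP`, `ip_candidates["tcp"]=NVMF_INITIATOR_IP`, then `echo 10.0.0.1`) resolve which address to dial by mapping the transport name to the *name* of an environment variable and dereferencing it. A hedged sketch of that logic; the variable names come from the trace, and the exported values here are placeholders assumed for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the transport-to-IP selection done by get_main_ns_ip in the trace.
# Assumption: the test harness exports these before the helper runs.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [ -z "$TEST_TRANSPORT" ] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var to consult
    [ -z "${!ip}" ] && return 1            # indirect expansion: its value
    echo "${!ip}"
}

get_main_ns_ip
```

Storing variable names (not values) in the associative array and resolving them with `${!ip}` lets one helper serve both transports without duplicating the fallback checks, which is exactly the `[[ -z tcp ]]` / `[[ -z NVMF_INITIATOR_IP ]]` / `echo 10.0.0.1` sequence the xtrace output shows.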
00:26:19.274 nvme0n1 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.274 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.534 
15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.534 nvme0n1 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:19.534 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.535 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.535 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:19.535 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:19.535 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.794 15:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.794 nvme0n1 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.794 15:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.794 15:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.794 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.794 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.794 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.794 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.054 15:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.054 nvme0n1 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.054 15:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:20.054 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.313 15:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.313 nvme0n1 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.313 15:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.313 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:20.571 nvme0n1 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.571 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.572 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.831 
15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.831 nvme0n1 00:26:20.831 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.832 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.832 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.832 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.832 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.832 15:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.832 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.092 15:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.092 nvme0n1 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.092 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.351 15:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==:
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==:
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==:
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==:
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.351 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.610 nvme0n1
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW:
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a:
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW:
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a:
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.610 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.869 nvme0n1
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==:
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D:
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==:
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]]
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D:
00:26:21.869 15:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.869 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.128 nvme0n1
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=:
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=:
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.128 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.387 nvme0n1
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.387 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm:
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=:
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm:
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=:
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.646 15:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.905 nvme0n1
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==:
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==:
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==:
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==:
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.905 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:23.163 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:23.163 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:23.163 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.163 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.422 nvme0n1
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW:
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a:
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW:
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a:
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.422 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.990 nvme0n1
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:23.990 15:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==:
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D:
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==:
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]]
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D:
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.990 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:24.249 nvme0n1 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.249 
15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.249 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 nvme0n1 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGM5MWQzZWFlYjVhMWIwZWYwZWU0NWM5MTAyZTYxMjWxaMJm: 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODNjNDJmODhkM2JlYmI5Nzk2MmRiMDE1NGM4N2IzMDU5ZDhkYzA0MTA5OWY5YWExOWQ0NjIxN2M4NGM5YzBmYmMUupg=: 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.817 15:59:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.817 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.818 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.818 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.818 15:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.386 nvme0n1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.386 15:59:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.386 15:59:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.386 15:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.953 nvme0n1 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.953 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.211 15:59:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.211 15:59:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.211 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.778 nvme0n1 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.778 15:59:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTJiMGNiNjMzYzcwODVhYWM0MmExNWViMDliODFjMDFlMWQ1ZmMxMDdkOWMzM2NkMLAkvA==: 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTA0NTFhYTYyNmZjMjUyNzkwY2UwMTcyMTg4ZGI2YTmXm55D: 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.778 15:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:27.346 nvme0n1 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRjYTE5YTVmNTIyNTYwMzBhNzVmNDQ4YTA5ZThlMDM4MzEzZjZkZDQ1N2ZmMDljYmJiYmExN2IwZWI1NTgxN1T7OXk=: 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.346 
15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.346 15:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.914 nvme0n1 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.914 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:28.173 
15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 request: 00:26:28.173 { 00:26:28.173 "name": "nvme0", 00:26:28.173 "trtype": "tcp", 00:26:28.173 "traddr": "10.0.0.1", 00:26:28.173 "adrfam": "ipv4", 00:26:28.173 "trsvcid": "4420", 00:26:28.173 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.173 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.173 "prchk_reftag": false, 00:26:28.173 "prchk_guard": false, 00:26:28.173 "hdgst": false, 00:26:28.173 "ddgst": false, 00:26:28.173 "allow_unrecognized_csi": false, 00:26:28.173 "method": "bdev_nvme_attach_controller", 00:26:28.173 "req_id": 1 00:26:28.173 } 00:26:28.173 Got JSON-RPC error response 00:26:28.173 response: 00:26:28.173 { 00:26:28.173 "code": -5, 00:26:28.173 "message": "Input/output 
error" 00:26:28.173 } 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 request: 00:26:28.173 { 00:26:28.173 "name": "nvme0", 00:26:28.173 "trtype": "tcp", 00:26:28.173 "traddr": "10.0.0.1", 
00:26:28.173 "adrfam": "ipv4", 00:26:28.173 "trsvcid": "4420", 00:26:28.173 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.173 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.173 "prchk_reftag": false, 00:26:28.173 "prchk_guard": false, 00:26:28.173 "hdgst": false, 00:26:28.173 "ddgst": false, 00:26:28.173 "dhchap_key": "key2", 00:26:28.173 "allow_unrecognized_csi": false, 00:26:28.173 "method": "bdev_nvme_attach_controller", 00:26:28.173 "req_id": 1 00:26:28.173 } 00:26:28.173 Got JSON-RPC error response 00:26:28.173 response: 00:26:28.173 { 00:26:28.173 "code": -5, 00:26:28.173 "message": "Input/output error" 00:26:28.173 } 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.173 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.433 15:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.433 15:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.433 request: 00:26:28.433 { 00:26:28.433 "name": "nvme0", 00:26:28.433 "trtype": "tcp", 00:26:28.433 "traddr": "10.0.0.1", 00:26:28.433 "adrfam": "ipv4", 00:26:28.433 "trsvcid": "4420", 00:26:28.433 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.433 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.433 "prchk_reftag": false, 00:26:28.433 "prchk_guard": false, 00:26:28.433 "hdgst": false, 00:26:28.433 "ddgst": false, 00:26:28.433 "dhchap_key": "key1", 00:26:28.433 "dhchap_ctrlr_key": "ckey2", 00:26:28.433 "allow_unrecognized_csi": false, 00:26:28.433 "method": "bdev_nvme_attach_controller", 00:26:28.433 "req_id": 1 00:26:28.433 } 00:26:28.433 Got JSON-RPC error response 00:26:28.433 response: 00:26:28.433 { 00:26:28.433 "code": -5, 00:26:28.433 "message": "Input/output error" 00:26:28.433 } 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.433 nvme0n1 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.433 15:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.433 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.692 15:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.692 request: 00:26:28.692 { 00:26:28.692 "name": "nvme0", 00:26:28.692 "dhchap_key": "key1", 00:26:28.692 "dhchap_ctrlr_key": "ckey2", 00:26:28.692 "method": "bdev_nvme_set_keys", 00:26:28.692 "req_id": 1 00:26:28.692 } 00:26:28.692 Got JSON-RPC error response 00:26:28.692 response: 00:26:28.692 { 00:26:28.692 "code": -13, 00:26:28.692 "message": "Permission denied" 00:26:28.692 } 00:26:28.692 
15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:28.692 15:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:30.069 15:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FkMTEwMjhiODkwZWM0ZmRiZDZiN2IxOGRiZGZhOTBmMmVhNGJjOTVkMmZhZDQ3yryl/A==: 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: ]] 00:26:31.005 15:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZkYzMyMzY2YjcwYzU2NGQ5NWVkZDdmYTcyMDQ1YmZhNGM1ZWQ5NGQxNTExYjkzuiNCvQ==: 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.005 15:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.005 nvme0n1 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.005 15:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjlhZWY0MDA4Nzk2NzYzMGY0NTk0MTk4NzAwYWFjZWQGjOlW: 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: ]] 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mjg1NTI0NDVlMjJjNWJhYWI0N2Y3MTBmNGYxM2ExZWUfdI5a: 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:31.005 
15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.005 request: 00:26:31.005 { 00:26:31.005 "name": "nvme0", 00:26:31.005 "dhchap_key": "key2", 00:26:31.005 "dhchap_ctrlr_key": "ckey1", 00:26:31.005 "method": "bdev_nvme_set_keys", 00:26:31.005 "req_id": 1 00:26:31.005 } 00:26:31.005 Got JSON-RPC error response 00:26:31.005 response: 00:26:31.005 { 00:26:31.005 "code": -13, 00:26:31.005 "message": "Permission denied" 00:26:31.005 } 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.005 15:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:31.005 15:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.383 rmmod nvme_tcp 00:26:32.383 rmmod nvme_fabrics 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2128947 ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2128947 ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2128947' 00:26:32.383 killing process with pid 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2128947 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.383 15:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:34.919 15:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:37.002 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:37.569 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.569 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.506 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.506 15:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Xj2 /tmp/spdk.key-null.diK /tmp/spdk.key-sha256.DJk /tmp/spdk.key-sha384.jos 
/tmp/spdk.key-sha512.0Qr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.506 15:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.041 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:26:41.300 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.300 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:41.300 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.300 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.559 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.559 00:26:41.559 real 0m54.988s 00:26:41.559 user 0m49.875s 00:26:41.559 sys 0m12.932s 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.559 ************************************ 00:26:41.559 END TEST nvmf_auth_host 00:26:41.559 
************************************ 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.559 ************************************ 00:26:41.559 START TEST nvmf_digest 00:26:41.559 ************************************ 00:26:41.559 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.818 * Looking for test storage... 00:26:41.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.818 15:59:36 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.818 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:41.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.818 --rc genhtml_branch_coverage=1 00:26:41.818 --rc genhtml_function_coverage=1 00:26:41.818 --rc genhtml_legend=1 00:26:41.819 --rc geninfo_all_blocks=1 00:26:41.819 --rc geninfo_unexecuted_blocks=1 00:26:41.819 00:26:41.819 ' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.819 --rc genhtml_branch_coverage=1 00:26:41.819 --rc genhtml_function_coverage=1 00:26:41.819 --rc genhtml_legend=1 00:26:41.819 --rc geninfo_all_blocks=1 00:26:41.819 --rc geninfo_unexecuted_blocks=1 00:26:41.819 00:26:41.819 ' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.819 --rc genhtml_branch_coverage=1 00:26:41.819 --rc genhtml_function_coverage=1 00:26:41.819 --rc genhtml_legend=1 00:26:41.819 --rc geninfo_all_blocks=1 00:26:41.819 --rc geninfo_unexecuted_blocks=1 00:26:41.819 00:26:41.819 ' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:41.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.819 --rc genhtml_branch_coverage=1 00:26:41.819 --rc genhtml_function_coverage=1 00:26:41.819 --rc genhtml_legend=1 00:26:41.819 --rc geninfo_all_blocks=1 00:26:41.819 --rc 
geninfo_unexecuted_blocks=1 00:26:41.819 00:26:41.819 ' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.819 15:59:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.819 15:59:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.390 
15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:48.390 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:48.390 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:48.390 Found net devices under 0000:af:00.0: cvl_0_0 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:48.390 Found net devices under 0000:af:00.1: cvl_0_1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.390 15:59:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.390 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:26:48.391 00:26:48.391 --- 10.0.0.2 ping statistics --- 00:26:48.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.391 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:48.391 00:26:48.391 --- 10.0.0.1 ping statistics --- 00:26:48.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.391 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 ************************************ 00:26:48.391 START TEST nvmf_digest_clean 00:26:48.391 ************************************ 00:26:48.391 
15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2142890 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2142890 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2142890 ']' 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.391 15:59:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.391 15:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 [2024-12-09 15:59:42.978700] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:26:48.391 [2024-12-09 15:59:42.978740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.391 [2024-12-09 15:59:43.053815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.391 [2024-12-09 15:59:43.093859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.391 [2024-12-09 15:59:43.093893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.391 [2024-12-09 15:59:43.093900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.391 [2024-12-09 15:59:43.093906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.391 [2024-12-09 15:59:43.093914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.391 [2024-12-09 15:59:43.094457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 null0 00:26:48.391 [2024-12-09 15:59:43.255039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.391 [2024-12-09 15:59:43.279237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2142910 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2142910 /var/tmp/bperf.sock 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2142910 ']' 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.391 [2024-12-09 15:59:43.331871] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:26:48.391 [2024-12-09 15:59:43.331909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142910 ] 00:26:48.391 [2024-12-09 15:59:43.405139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.391 [2024-12-09 15:59:43.443814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.391 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.651 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.651 15:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.909 nvme0n1 00:26:48.909 15:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.910 15:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.168 Running I/O for 2 seconds... 00:26:51.041 24956.00 IOPS, 97.48 MiB/s [2024-12-09T14:59:46.269Z] 24740.00 IOPS, 96.64 MiB/s 00:26:51.041 Latency(us) 00:26:51.041 [2024-12-09T14:59:46.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.041 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:51.041 nvme0n1 : 2.01 24731.99 96.61 0.00 0.00 5170.41 2777.48 16103.13 00:26:51.041 [2024-12-09T14:59:46.269Z] =================================================================================================================== 00:26:51.041 [2024-12-09T14:59:46.269Z] Total : 24731.99 96.61 0.00 0.00 5170.41 2777.48 16103.13 00:26:51.041 { 00:26:51.041 "results": [ 00:26:51.041 { 00:26:51.041 "job": "nvme0n1", 00:26:51.041 "core_mask": "0x2", 00:26:51.041 "workload": "randread", 00:26:51.041 "status": "finished", 00:26:51.041 "queue_depth": 128, 00:26:51.041 "io_size": 4096, 00:26:51.041 "runtime": 2.005823, 00:26:51.041 "iops": 24731.9928029542, 00:26:51.041 "mibps": 96.60934688653984, 00:26:51.041 "io_failed": 0, 00:26:51.041 "io_timeout": 0, 00:26:51.041 "avg_latency_us": 5170.405230492777, 00:26:51.041 "min_latency_us": 2777.478095238095, 00:26:51.041 "max_latency_us": 16103.131428571429 00:26:51.041 } 00:26:51.041 ], 00:26:51.041 "core_count": 1 00:26:51.041 } 00:26:51.041 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:51.041 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:51.041 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:51.041 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:51.041 | select(.opcode=="crc32c") 00:26:51.041 | "\(.module_name) \(.executed)"' 00:26:51.041 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.300 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.300 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.300 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.300 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2142910 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2142910 ']' 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2142910 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142910 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142910' 00:26:51.301 killing process with pid 2142910 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2142910 00:26:51.301 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.301 00:26:51.301 Latency(us) 00:26:51.301 [2024-12-09T14:59:46.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.301 [2024-12-09T14:59:46.529Z] =================================================================================================================== 00:26:51.301 [2024-12-09T14:59:46.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.301 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2142910 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2143451 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2143451 /var/tmp/bperf.sock 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2143451 ']' 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.560 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.560 [2024-12-09 15:59:46.665482] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:26:51.560 [2024-12-09 15:59:46.665531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143451 ] 00:26:51.560 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.560 Zero copy mechanism will not be used. 
00:26:51.560 [2024-12-09 15:59:46.739618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.560 [2024-12-09 15:59:46.778830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.819 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.819 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:51.819 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:51.819 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.819 15:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.078 15:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.078 15:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.337 nvme0n1 00:26:52.337 15:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.337 15:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.337 Zero copy mechanism will not be used. 00:26:52.337 Running I/O for 2 seconds... 
00:26:54.651 5965.00 IOPS, 745.62 MiB/s [2024-12-09T14:59:49.879Z] 5974.50 IOPS, 746.81 MiB/s 00:26:54.651 Latency(us) 00:26:54.651 [2024-12-09T14:59:49.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.651 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.651 nvme0n1 : 2.00 5977.83 747.23 0.00 0.00 2673.72 600.75 7895.53 00:26:54.651 [2024-12-09T14:59:49.879Z] =================================================================================================================== 00:26:54.651 [2024-12-09T14:59:49.879Z] Total : 5977.83 747.23 0.00 0.00 2673.72 600.75 7895.53 00:26:54.651 { 00:26:54.651 "results": [ 00:26:54.651 { 00:26:54.651 "job": "nvme0n1", 00:26:54.651 "core_mask": "0x2", 00:26:54.651 "workload": "randread", 00:26:54.651 "status": "finished", 00:26:54.651 "queue_depth": 16, 00:26:54.651 "io_size": 131072, 00:26:54.651 "runtime": 2.004072, 00:26:54.651 "iops": 5977.829139871222, 00:26:54.651 "mibps": 747.2286424839027, 00:26:54.651 "io_failed": 0, 00:26:54.651 "io_timeout": 0, 00:26:54.651 "avg_latency_us": 2673.7195999682012, 00:26:54.651 "min_latency_us": 600.7466666666667, 00:26:54.651 "max_latency_us": 7895.527619047619 00:26:54.651 } 00:26:54.651 ], 00:26:54.651 "core_count": 1 00:26:54.651 } 00:26:54.651 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.651 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.652 | select(.opcode=="crc32c") 00:26:54.652 | "\(.module_name) \(.executed)"' 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2143451 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2143451 ']' 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2143451 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143451 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143451' 00:26:54.652 killing process with pid 2143451 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2143451 00:26:54.652 Received shutdown signal, test time was about 2.000000 seconds 
00:26:54.652 00:26:54.652 Latency(us) 00:26:54.652 [2024-12-09T14:59:49.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.652 [2024-12-09T14:59:49.880Z] =================================================================================================================== 00:26:54.652 [2024-12-09T14:59:49.880Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.652 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2143451 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2144057 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2144057 /var/tmp/bperf.sock 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2144057 ']' 00:26:54.911 15:59:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.911 15:59:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:54.911 [2024-12-09 15:59:50.013061] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:26:54.911 [2024-12-09 15:59:50.013112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144057 ] 00:26:54.911 [2024-12-09 15:59:50.088118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.911 [2024-12-09 15:59:50.129074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.170 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.170 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:55.170 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.170 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.170 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.429 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.429 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.688 nvme0n1 00:26:55.688 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:55.688 15:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.947 Running I/O for 2 seconds... 
00:26:57.820 28388.00 IOPS, 110.89 MiB/s [2024-12-09T14:59:53.048Z] 28515.00 IOPS, 111.39 MiB/s 00:26:57.821 Latency(us) 00:26:57.821 [2024-12-09T14:59:53.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.821 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:57.821 nvme0n1 : 2.00 28521.94 111.41 0.00 0.00 4482.08 1911.47 7957.94 00:26:57.821 [2024-12-09T14:59:53.049Z] =================================================================================================================== 00:26:57.821 [2024-12-09T14:59:53.049Z] Total : 28521.94 111.41 0.00 0.00 4482.08 1911.47 7957.94 00:26:57.821 { 00:26:57.821 "results": [ 00:26:57.821 { 00:26:57.821 "job": "nvme0n1", 00:26:57.821 "core_mask": "0x2", 00:26:57.821 "workload": "randwrite", 00:26:57.821 "status": "finished", 00:26:57.821 "queue_depth": 128, 00:26:57.821 "io_size": 4096, 00:26:57.821 "runtime": 2.004001, 00:26:57.821 "iops": 28521.941855318437, 00:26:57.821 "mibps": 111.41383537233764, 00:26:57.821 "io_failed": 0, 00:26:57.821 "io_timeout": 0, 00:26:57.821 "avg_latency_us": 4482.083177024754, 00:26:57.821 "min_latency_us": 1911.4666666666667, 00:26:57.821 "max_latency_us": 7957.942857142857 00:26:57.821 } 00:26:57.821 ], 00:26:57.821 "core_count": 1 00:26:57.821 } 00:26:57.821 15:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:57.821 15:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:57.821 15:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:57.821 15:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:57.821 | select(.opcode=="crc32c") 00:26:57.821 | "\(.module_name) \(.executed)"' 00:26:57.821 15:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2144057 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2144057 ']' 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2144057 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144057 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144057' 00:26:58.080 killing process with pid 2144057 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2144057 00:26:58.080 Received shutdown signal, test time was about 2.000000 seconds 
00:26:58.080 00:26:58.080 Latency(us) 00:26:58.080 [2024-12-09T14:59:53.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.080 [2024-12-09T14:59:53.308Z] =================================================================================================================== 00:26:58.080 [2024-12-09T14:59:53.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.080 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2144057 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2144525 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2144525 /var/tmp/bperf.sock 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2144525 ']' 00:26:58.339 15:59:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.339 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.339 [2024-12-09 15:59:53.444555] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:26:58.339 [2024-12-09 15:59:53.444605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144525 ] 00:26:58.339 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.339 Zero copy mechanism will not be used. 
00:26:58.339 [2024-12-09 15:59:53.518088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.339 [2024-12-09 15:59:53.557367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.598 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.598 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:58.598 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.598 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.598 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.857 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.857 15:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.115 nvme0n1 00:26:59.115 15:59:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.115 15:59:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.374 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.374 Zero copy mechanism will not be used. 00:26:59.374 Running I/O for 2 seconds... 
00:27:01.246 6356.00 IOPS, 794.50 MiB/s [2024-12-09T14:59:56.474Z] 6522.00 IOPS, 815.25 MiB/s 00:27:01.246 Latency(us) 00:27:01.246 [2024-12-09T14:59:56.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.246 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:01.246 nvme0n1 : 2.00 6519.25 814.91 0.00 0.00 2449.73 1825.65 7021.71 00:27:01.246 [2024-12-09T14:59:56.474Z] =================================================================================================================== 00:27:01.246 [2024-12-09T14:59:56.474Z] Total : 6519.25 814.91 0.00 0.00 2449.73 1825.65 7021.71 00:27:01.246 { 00:27:01.246 "results": [ 00:27:01.246 { 00:27:01.246 "job": "nvme0n1", 00:27:01.246 "core_mask": "0x2", 00:27:01.246 "workload": "randwrite", 00:27:01.246 "status": "finished", 00:27:01.246 "queue_depth": 16, 00:27:01.246 "io_size": 131072, 00:27:01.246 "runtime": 2.003911, 00:27:01.246 "iops": 6519.251603489377, 00:27:01.246 "mibps": 814.9064504361721, 00:27:01.246 "io_failed": 0, 00:27:01.246 "io_timeout": 0, 00:27:01.246 "avg_latency_us": 2449.7273262764998, 00:27:01.246 "min_latency_us": 1825.6457142857143, 00:27:01.246 "max_latency_us": 7021.714285714285 00:27:01.246 } 00:27:01.246 ], 00:27:01.246 "core_count": 1 00:27:01.246 } 00:27:01.246 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.246 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:01.246 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.246 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.246 | select(.opcode=="crc32c") 00:27:01.246 | "\(.module_name) \(.executed)"' 00:27:01.246 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.505 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.505 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2144525 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2144525 ']' 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2144525 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144525 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144525' 00:27:01.506 killing process with pid 2144525 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2144525 00:27:01.506 Received shutdown signal, test time was about 2.000000 seconds 
00:27:01.506 00:27:01.506 Latency(us) 00:27:01.506 [2024-12-09T14:59:56.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.506 [2024-12-09T14:59:56.734Z] =================================================================================================================== 00:27:01.506 [2024-12-09T14:59:56.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.506 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2144525 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2142890 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2142890 ']' 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2142890 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142890 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142890' 00:27:01.765 killing process with pid 2142890 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2142890 00:27:01.765 15:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2142890 00:27:02.024 00:27:02.024 
real 0m14.151s 00:27:02.024 user 0m27.261s 00:27:02.024 sys 0m4.488s 00:27:02.024 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.024 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 ************************************ 00:27:02.024 END TEST nvmf_digest_clean 00:27:02.024 ************************************ 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:02.025 ************************************ 00:27:02.025 START TEST nvmf_digest_error 00:27:02.025 ************************************ 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2145232 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2145232 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2145232 ']' 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.025 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.025 [2024-12-09 15:59:57.200707] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:02.025 [2024-12-09 15:59:57.200747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.284 [2024-12-09 15:59:57.279285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.284 [2024-12-09 15:59:57.318104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.284 [2024-12-09 15:59:57.318137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:02.284 [2024-12-09 15:59:57.318144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.284 [2024-12-09 15:59:57.318150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.284 [2024-12-09 15:59:57.318155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.284 [2024-12-09 15:59:57.318677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.284 [2024-12-09 15:59:57.383118] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.284 15:59:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.284 null0 00:27:02.284 [2024-12-09 15:59:57.478590] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.284 [2024-12-09 15:59:57.502779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2145252 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2145252 /var/tmp/bperf.sock 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2145252 ']' 
00:27:02.284 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.544 [2024-12-09 15:59:57.552961] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:02.544 [2024-12-09 15:59:57.553004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145252 ] 00:27:02.544 [2024-12-09 15:59:57.625681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.544 [2024-12-09 15:59:57.666344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.544 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.803 15:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.371 nvme0n1 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:03.371 15:59:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:03.371 Running I/O for 2 seconds...
00:27:03.371 [2024-12-09 15:59:58.484819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.371 [2024-12-09 15:59:58.484852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.371 [2024-12-09 15:59:58.484863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.371 [2024-12-09 15:59:58.497458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.371 [2024-12-09 15:59:58.497483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.371 [2024-12-09 15:59:58.497492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.371 [2024-12-09 15:59:58.509489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.371 [2024-12-09 15:59:58.509511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.371 [2024-12-09 15:59:58.509519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.371 [2024-12-09 15:59:58.519048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.371 [2024-12-09 15:59:58.519068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.371 [2024-12-09 15:59:58.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.371 [2024-12-09 15:59:58.528154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.371 [2024-12-09 15:59:58.528176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.528184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.538308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.538328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.538336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.548764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.548784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.548792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.557317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.557336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.557344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.570310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.570330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.570339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.582838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.582856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.582864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.372 [2024-12-09 15:59:58.594891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.372 [2024-12-09 15:59:58.594911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.372 [2024-12-09 15:59:58.594920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.603256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.603275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.603283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.613901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.613921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.613933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.622980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.623000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.623008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.634173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.634192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.634200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.642972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.642991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.642999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.653275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.653295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.653302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.664496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.664515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.664523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.673516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.673535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.673542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.685476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.631 [2024-12-09 15:59:58.685503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.631 [2024-12-09 15:59:58.696117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.631 [2024-12-09 15:59:58.696136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.696144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.709548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.709571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.709579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.717974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.717993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.718001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.729622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.729642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.729651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.741575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.741595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.741603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.754161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.754181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.754190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.766656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.766675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.766684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.778867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.778886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.778894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.789426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.789453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.799732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.799752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.799760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.807769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.807789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.807797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.820090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.820118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.831392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.831412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.831419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.840970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.840989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.840997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.632 [2024-12-09 15:59:58.849429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.632 [2024-12-09 15:59:58.849449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.632 [2024-12-09 15:59:58.849457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.859516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.859537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.859545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.868816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.868837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.868845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.877974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.877993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.878001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.887156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.887176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.887187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.897918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.897937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.897945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.906392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.906412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.917708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.917728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.917736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.927061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.927080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.927087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.936341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.936360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.936367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.947868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.947887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.947895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.956300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.956320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.956328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.967281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.967301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.967309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.976009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.976029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.976037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.985236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.985256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.985263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:58.995095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:58.995115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:58.995122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.005485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.005504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.005512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.014542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.014561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.014569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.023921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.023940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.023948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.034006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.034025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.042242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.042261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.042269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.052462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.052481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.052493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.063070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.063089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.063097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.892 [2024-12-09 15:59:59.071092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.892 [2024-12-09 15:59:59.071112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.892 [2024-12-09 15:59:59.071120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.893 [2024-12-09 15:59:59.081107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.893 [2024-12-09 15:59:59.081126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.893 [2024-12-09 15:59:59.081133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.893 [2024-12-09 15:59:59.090352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.893 [2024-12-09 15:59:59.090371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.893 [2024-12-09 15:59:59.090379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.893 [2024-12-09 15:59:59.099526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.893 [2024-12-09 15:59:59.099544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.893 [2024-12-09 15:59:59.099552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.893 [2024-12-09 15:59:59.109727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:03.893 [2024-12-09 15:59:59.109747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.893 [2024-12-09 15:59:59.109754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.119590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.119609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.119618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.129165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.129185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.129192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.137576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.137599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.137607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.149602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.149622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.149630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.160460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.160480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.160488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.174143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.174170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.186434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.186462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.198588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.198607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.198614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.206867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.206886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.206894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.218659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.218679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.218687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.229133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.229152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.229159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.238290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.238309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.238317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.247162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.247181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.247188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.257402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.257421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.257429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.266416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.266435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.266443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.275689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.275708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.275715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.284865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.284883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.284892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.295033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.295052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09 15:59:59.295060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.152 [2024-12-09 15:59:59.303312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0)
00:27:04.152 [2024-12-09 15:59:59.303331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.152 [2024-12-09
15:59:59.303338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.313924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.313944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.313956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.324964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.324985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.324992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.335357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.335378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.335386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.344177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.344197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22588 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.344205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.353597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.353617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.353626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.363892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.363912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.363920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.152 [2024-12-09 15:59:59.372939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.152 [2024-12-09 15:59:59.372959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.152 [2024-12-09 15:59:59.372967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.383834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.383865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.383873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.392244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.392264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.392271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.403891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.403915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.403923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.411649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.411669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.411676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.421389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.421409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.421417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.430979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.430999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.431007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.439757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.439777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.439785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.449254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.449273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.449282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.458788] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.458808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.458815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.467051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.467070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.467078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 25090.00 IOPS, 98.01 MiB/s [2024-12-09T14:59:59.639Z] [2024-12-09 15:59:59.479168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.479188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.479200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.487373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.487393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.487401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.496724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.496743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.496752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.505958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.505978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.505986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.515824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.515843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.515851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.525304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.525326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.525334] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.534684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.534703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.534711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.543076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.543096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.543104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.552025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.552044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.552052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.563443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.563466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7424 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.563474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.572065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.572085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.572093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.583141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.583171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.583180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.593873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.593893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.593901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.605541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.605561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:16980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.605568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.614498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.614518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.614525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.625897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.625916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.625924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.411 [2024-12-09 15:59:59.636462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.411 [2024-12-09 15:59:59.636482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.411 [2024-12-09 15:59:59.636490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.670 [2024-12-09 15:59:59.645080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.670 [2024-12-09 
15:59:59.645100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.670 [2024-12-09 15:59:59.645107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.670 [2024-12-09 15:59:59.656737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.670 [2024-12-09 15:59:59.656759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.670 [2024-12-09 15:59:59.656767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.670 [2024-12-09 15:59:59.666640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.666660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.666668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.679892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.679912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.679920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.688622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.688642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.688650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.700256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.700276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.700284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.708614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.708633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.708641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.719919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.719939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.719946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.730589] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.730608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.730616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.740358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.740377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.752141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.752161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.752169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.763109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.763128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.763135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.771800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.771819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.771827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.784041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.784061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.784069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.796540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.796560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.796567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.807698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.807717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.807725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.817536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.817554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.817562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.826774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.826793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.826800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.838014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.838033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.838041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.846856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.846877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 
15:59:59.846885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.857995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.858015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.858024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.869340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.869359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.869367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.876782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.876801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.876809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.887868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.887888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:417 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.887895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.671 [2024-12-09 15:59:59.895498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.671 [2024-12-09 15:59:59.895517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.671 [2024-12-09 15:59:59.895525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.905684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.905703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.905711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.915630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.915648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.915659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.924956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.924975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.924983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.933552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.933571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.933579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.943079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.943098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.943106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.952661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.952680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.952688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.962110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.962130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.962138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.971261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.971281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.971289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.980114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.980134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.980142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.989537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.989558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.989566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 15:59:59.998973] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 15:59:59.998997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 15:59:59.999004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.008626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.008648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.008658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.018779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.018800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.018808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.028012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.028033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.028042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.038539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.038560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.038569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.048652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.048672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.048680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.058785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.058804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.058812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.067247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.067267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.067275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.078014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.078034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.078043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.089170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.089190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.089198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.097944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.097964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 16:00:00.097972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.110524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.110543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.932 [2024-12-09 
16:00:00.110551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.932 [2024-12-09 16:00:00.121920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.932 [2024-12-09 16:00:00.121940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.933 [2024-12-09 16:00:00.121949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.933 [2024-12-09 16:00:00.134123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.933 [2024-12-09 16:00:00.134144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.933 [2024-12-09 16:00:00.134152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.933 [2024-12-09 16:00:00.145695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.933 [2024-12-09 16:00:00.145715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.933 [2024-12-09 16:00:00.145723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.933 [2024-12-09 16:00:00.154176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:04.933 [2024-12-09 16:00:00.154221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4016 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.933 [2024-12-09 16:00:00.154230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.192 [2024-12-09 16:00:00.164798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.192 [2024-12-09 16:00:00.164818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.192 [2024-12-09 16:00:00.164826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.192 [2024-12-09 16:00:00.177102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.192 [2024-12-09 16:00:00.177122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.192 [2024-12-09 16:00:00.177134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.192 [2024-12-09 16:00:00.189692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.192 [2024-12-09 16:00:00.189712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.192 [2024-12-09 16:00:00.189720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.192 [2024-12-09 16:00:00.197909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.192 [2024-12-09 16:00:00.197928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.192 [2024-12-09 16:00:00.197936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.192 [2024-12-09 16:00:00.208899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.192 [2024-12-09 16:00:00.208918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.192 [2024-12-09 16:00:00.208926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.221708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.221727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.221736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.234576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.234596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.234604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.246988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.247008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.247016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.257288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.257307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.257316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.265842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.265862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.265870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.277130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.277149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.277157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.285615] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.285642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.295456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.295476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.295483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.307115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.307135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.307143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.318239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.318258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.318266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.327195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.327214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.327226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.339340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.339359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.339367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.351007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.351027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.351034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.361767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.361787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.361799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.370148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.370169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.370177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.380676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.380696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.380705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.390211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.390235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.390243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.398805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.398825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 
16:00:00.398833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.408440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.408467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.193 [2024-12-09 16:00:00.419625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.193 [2024-12-09 16:00:00.419646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.193 [2024-12-09 16:00:00.419654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 [2024-12-09 16:00:00.429880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.453 [2024-12-09 16:00:00.429899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.453 [2024-12-09 16:00:00.429907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 [2024-12-09 16:00:00.438902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.453 [2024-12-09 16:00:00.438922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:405 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.453 [2024-12-09 16:00:00.438930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 [2024-12-09 16:00:00.449335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.453 [2024-12-09 16:00:00.449359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.453 [2024-12-09 16:00:00.449367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 [2024-12-09 16:00:00.461843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.453 [2024-12-09 16:00:00.461863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.453 [2024-12-09 16:00:00.461871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 [2024-12-09 16:00:00.472787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e1bdd0) 00:27:05.453 [2024-12-09 16:00:00.472806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.453 [2024-12-09 16:00:00.472814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.453 25006.50 IOPS, 97.68 MiB/s 00:27:05.453 Latency(us) 00:27:05.453 [2024-12-09T15:00:00.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, 
depth: 128, IO size: 4096) 00:27:05.453 nvme0n1 : 2.01 25036.60 97.80 0.00 0.00 5106.12 2496.61 18225.25 00:27:05.453 [2024-12-09T15:00:00.681Z] =================================================================================================================== 00:27:05.453 [2024-12-09T15:00:00.681Z] Total : 25036.60 97.80 0.00 0.00 5106.12 2496.61 18225.25 00:27:05.453 { 00:27:05.453 "results": [ 00:27:05.453 { 00:27:05.453 "job": "nvme0n1", 00:27:05.453 "core_mask": "0x2", 00:27:05.453 "workload": "randread", 00:27:05.453 "status": "finished", 00:27:05.453 "queue_depth": 128, 00:27:05.453 "io_size": 4096, 00:27:05.453 "runtime": 2.005384, 00:27:05.453 "iops": 25036.601468845867, 00:27:05.453 "mibps": 97.79922448767917, 00:27:05.453 "io_failed": 0, 00:27:05.453 "io_timeout": 0, 00:27:05.453 "avg_latency_us": 5106.1165316094575, 00:27:05.453 "min_latency_us": 2496.609523809524, 00:27:05.453 "max_latency_us": 18225.249523809525 00:27:05.453 } 00:27:05.453 ], 00:27:05.453 "core_count": 1 00:27:05.453 } 00:27:05.453 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:05.453 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:05.453 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:05.453 | .driver_specific 00:27:05.453 | .nvme_error 00:27:05.453 | .status_code 00:27:05.453 | .command_transient_transport_error' 00:27:05.453 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2145252 00:27:05.712 16:00:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2145252 ']' 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2145252 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145252 00:27:05.712 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145252' 00:27:05.713 killing process with pid 2145252 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2145252 00:27:05.713 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.713 00:27:05.713 Latency(us) 00:27:05.713 [2024-12-09T15:00:00.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.713 [2024-12-09T15:00:00.941Z] =================================================================================================================== 00:27:05.713 [2024-12-09T15:00:00.941Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2145252 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2145881 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2145881 /var/tmp/bperf.sock 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2145881 ']' 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.713 16:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.972 [2024-12-09 16:00:00.964781] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:27:05.972 [2024-12-09 16:00:00.964832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145881 ] 00:27:05.972 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.972 Zero copy mechanism will not be used. 00:27:05.972 [2024-12-09 16:00:01.041805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.972 [2024-12-09 16:00:01.082977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.972 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:05.972 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:05.972 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:05.972 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.231 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.799 nvme0n1 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:06.799 16:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.799 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:06.799 Zero copy mechanism will not be used. 00:27:06.799 Running I/O for 2 seconds... 
00:27:06.799 [2024-12-09 16:00:01.927213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.927254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.927264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.932562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.932585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.932595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.935341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.935362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.935371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.940488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.940511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.940519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.945706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.945727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.945735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.950905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.950925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.950933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.956173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.956193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.956201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.961307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.961327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.961335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.966493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.966513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.966521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.971641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.971662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.971670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.976808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.976829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.976837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.981977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.799 [2024-12-09 16:00:01.982006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.987106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.987126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.987134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.992246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.992266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.992277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:01.997428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:01.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.799 [2024-12-09 16:00:01.997456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.799 [2024-12-09 16:00:02.002620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.799 [2024-12-09 16:00:02.002641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.800 [2024-12-09 16:00:02.002649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.800 [2024-12-09 16:00:02.007849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.800 [2024-12-09 16:00:02.007870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.800 [2024-12-09 16:00:02.007878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.800 [2024-12-09 16:00:02.013043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.800 [2024-12-09 16:00:02.013063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.800 [2024-12-09 16:00:02.013071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.800 [2024-12-09 16:00:02.018110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.800 [2024-12-09 16:00:02.018130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.800 [2024-12-09 16:00:02.018138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.800 [2024-12-09 16:00:02.023274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:06.800 [2024-12-09 16:00:02.023295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.800 [2024-12-09 16:00:02.023302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.028470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.028491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.028499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.033738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.033758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.033766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.038902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.038923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.038931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.044094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 
00:27:07.060 [2024-12-09 16:00:02.044116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.044123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.049294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.049315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.049323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.054444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.054465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.054473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.059619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.059641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.059649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.064768] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.064789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.069891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.069911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.069920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.074927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.074947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.074956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.079947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.079968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.079979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.085038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.085059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.090132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.090153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.090161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.095223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.095243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.095251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.100309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.100328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.100337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.105427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.105447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.105455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.110636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.110656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.110664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.115885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.115905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.115912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.121003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.121023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.121031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.126159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.126182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.126190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.131403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.131423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.131432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.136693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.136714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.136725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.141723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.141744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.141752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.146878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.146898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.146906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.151907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.151928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.060 [2024-12-09 16:00:02.151936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.060 [2024-12-09 16:00:02.156997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.060 [2024-12-09 16:00:02.157018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.157025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.162131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.162153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.162160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.167257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.167277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.167285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.172396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.172418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.172426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.177528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.177549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.177557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.182741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.182763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.182772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.187873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.187895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.187903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.193244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.193265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.193273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.198461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.198481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.198489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.203651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 
00:27:07.061 [2024-12-09 16:00:02.203671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.203679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.208791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.208812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.208821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.213948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.213969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.213980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.219126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.219147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.219155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.225072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.225093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.225101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.230424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.230446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.230454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.235575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.235595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.235603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.240713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.240734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.240742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.245911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.245932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.245940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.251110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.251131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.251139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.256306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.256327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.256336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.261589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.261613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.261621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.266711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.266731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.266739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.271946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.271966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.271974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.277181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.277202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.277210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.061 [2024-12-09 16:00:02.282401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.061 [2024-12-09 16:00:02.282423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.061 [2024-12-09 16:00:02.282431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.287639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.287660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.287668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.292796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.292816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.292824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.298011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.298032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.298039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.303134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.303155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.322 [2024-12-09 16:00:02.303162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.308282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.308302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.308310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.313461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.313481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.313489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.318614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.318635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.318642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.323764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.323785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.323793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.328913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.328934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.328942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.334157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.334178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.334186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.339317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.339338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.339346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.344467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.344488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.344496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.349636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.349657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.349668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.354869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.354890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.354897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.359990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.360010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.360018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.365392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 
00:27:07.322 [2024-12-09 16:00:02.365415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.365423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.370742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.370764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.370772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.375920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.375941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.381065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.381086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.381094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.386214] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.386242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.386249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.391345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.391365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.391373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.396781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.396807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.396815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.402791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.402813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.402822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:07.322 [2024-12-09 16:00:02.408473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.322 [2024-12-09 16:00:02.408494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.322 [2024-12-09 16:00:02.408502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.415314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.415337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.415344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.422460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.422483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.422492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.429763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.429785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.429794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.437542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.437564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.437573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.444808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.444830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.444839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.452138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.452160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.452168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.459663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.459684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.459692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.466986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.467008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.467016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.474774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.474797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.474805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.481876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.481898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.481906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.489164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.489186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.323 [2024-12-09 16:00:02.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.496512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.496534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.496542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.503113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.503135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.503144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.509446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.509468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.509476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.514624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.514648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.519813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.519834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.519842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.526003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.526025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.526033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.533302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.533324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.323 [2024-12-09 16:00:02.533332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.323 [2024-12-09 16:00:02.540228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.323 [2024-12-09 16:00:02.540250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.323 [2024-12-09 16:00:02.540257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.323 [2024-12-09 16:00:02.546352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.323 [2024-12-09 16:00:02.546374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.323 [2024-12-09 16:00:02.546382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.553099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.553121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.553128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.558496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.558517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.558525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.563659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.563680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.563688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.568982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.569002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.569010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.574142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.574162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.579360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.579380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.579389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.584602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.584622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.584630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.589848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.589868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.589875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.595104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.595124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.595132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.600317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.600338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.600345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.605328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.605349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.605357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.609947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.609967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.609978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.613085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.613111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.618062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.618081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.618088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.622994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.623013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.623021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.627973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.627992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.628002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.632929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.632950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.632958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.637926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.637946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.637953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.643066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.643086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.643094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.648307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.648337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.648345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.653638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.653662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.653671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.658826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.658848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.658856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.664052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.664073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.664081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.669242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.584 [2024-12-09 16:00:02.669262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.584 [2024-12-09 16:00:02.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.584 [2024-12-09 16:00:02.674420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.674441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.674448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.679611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.679630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.679638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.684912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.684933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.684943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.689880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.689901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.689909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.695060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.695081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.695089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.700302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.700332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.705509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.705530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.705537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.710750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.710770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.710778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.715950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.715971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.715978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.721275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.721296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.721303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.726910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.726932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.726940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.732225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.732246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.732254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.737513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.737533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.737541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.742875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.742896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.742907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.748230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.748251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.748259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.753432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.753454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.753462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.758705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.758726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.758734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.764006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.764026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.764034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.769233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.769253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.769261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.774802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.774831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.779990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.780010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.780019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.785235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.785255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.785263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.790437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.790463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.790471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.795569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.795589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.795598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.800939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.800966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.800974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.585 [2024-12-09 16:00:02.806275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.585 [2024-12-09 16:00:02.806296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.585 [2024-12-09 16:00:02.806304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.811550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.811578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.816860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.816881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.816889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.821983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.822004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.822012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.827166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.827186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.827194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.832289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.832309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.832317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.837537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.837558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.837566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.842746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.842766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.842773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.847948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.847970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.847978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.853085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.853105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.853112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.858295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.858316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.858324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.863487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.863508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.863516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.868716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.868737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.868744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.873954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.873974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.873982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.879188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.879209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.879228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.884380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.884401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.884408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.889626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.889647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.889654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.894856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.894876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.894884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.898209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.898244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.902771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.902791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.902799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.908039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.908058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.908066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.913469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.913489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.913497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.846 5720.00 IOPS, 715.00 MiB/s [2024-12-09T15:00:03.074Z] [2024-12-09 16:00:02.919758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.919777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.919785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.925183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.925203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.925211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.846 [2024-12-09 16:00:02.930418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.846 [2024-12-09 16:00:02.930437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.846 [2024-12-09 16:00:02.930445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.935603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.935624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.847 [2024-12-09 16:00:02.935631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.941883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.941904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.847 [2024-12-09 16:00:02.941912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.946912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.946931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.847 [2024-12-09 16:00:02.946939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.952543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.952562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.847 [2024-12-09 16:00:02.952570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.958291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.958311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.847 [2024-12-09 16:00:02.958318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:07.847 [2024-12-09 16:00:02.963633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:07.847 [2024-12-09 16:00:02.963654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.963662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.969770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.969792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.969803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.975376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.975397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.975405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.980858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.980879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.980888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.986370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.986391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.986399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.991915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.991936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.991943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:02.997311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:02.997331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:02.997338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.002747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.002767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.002775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.008103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.008124] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.008131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.013423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.013443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.013451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.019124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.019148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.019156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.024723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.024744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.024752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.030233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.030254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.030261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.035681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.035701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.035708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.040971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.040992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.041000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.046333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.046354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.046361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.051640] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.051660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.051668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.056889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.056909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.056917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.062269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.062290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.062298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.847 [2024-12-09 16:00:03.067796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:07.847 [2024-12-09 16:00:03.067817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.847 [2024-12-09 16:00:03.067825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.073266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.073288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.073296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.078542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.078564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.078572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.083889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.083911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.083918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.088956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.088977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.088985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.094082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.094104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.094112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.099174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.099194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.099202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.104338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.104359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.104367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.109508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.109529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.109540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.114673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.114693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.114701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.119907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.119927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.119935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.124875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.124896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.124904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.130244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.130264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.130272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.135804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.135825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.135833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.141351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.141373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.141380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.147047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.147076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.152208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.152236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.152244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.155550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.155569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.155577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.159786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.159807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.159815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.165190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.165209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.165224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.170345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 
16:00:03.170365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.170372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.108 [2024-12-09 16:00:03.175600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.108 [2024-12-09 16:00:03.175620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.108 [2024-12-09 16:00:03.175628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.181110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.181138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.186865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.186886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.186894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.192670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.192690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.192698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.198440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.198461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.198472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.204033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.204053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.204061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.209234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.209254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.209261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.214632] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.214652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.214660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.219329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.219349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.219357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.224521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.224542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.224549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.109 [2024-12-09 16:00:03.229692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.109 [2024-12-09 16:00:03.229712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.109 [2024-12-09 16:00:03.229720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.234810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.234829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.234837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.239954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.239974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.239982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.245061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.245086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.245093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.250255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.250277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.250285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.255463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.255484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.255493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.260705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.260725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.260733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.265644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.265665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.265674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.270335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.270355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.270363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.275233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.275254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.275262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.280113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.280134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.280142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.285260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.285280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.285288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.290389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.290409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.290417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.295449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.295469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.295476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.300520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.300541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.300549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.305781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.305802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.305810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.311081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.311101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.311109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.316509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.316530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.109 [2024-12-09 16:00:03.316537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.109 [2024-12-09 16:00:03.321814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.109 [2024-12-09 16:00:03.321834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.110 [2024-12-09 16:00:03.321842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.110 [2024-12-09 16:00:03.327103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.110 [2024-12-09 16:00:03.327123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.110 [2024-12-09 16:00:03.327130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.110 [2024-12-09 16:00:03.332400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.110 [2024-12-09 16:00:03.332422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.110 [2024-12-09 16:00:03.332433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.337805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.337826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.337833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.343001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.343020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.343027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.345928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.345948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.345956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.350901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.350921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.350929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.356056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.356076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.356083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.361069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.361089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.361097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.366317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.366337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.366344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.371654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.371674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.371682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.376893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.376917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.376925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.382095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.382115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.382123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.387466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.387487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.387495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.392857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.392877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.392885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.397960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.397980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.397989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.403322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.370 [2024-12-09 16:00:03.403342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.370 [2024-12-09 16:00:03.403350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.370 [2024-12-09 16:00:03.408584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.408605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.408613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.414008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.414028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.414036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.419354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.419375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.419386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.424607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.424628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.424635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.430246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.430267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.430274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.436261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.436280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.436288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.441498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.441518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.441526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.446580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.446601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.446609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.451868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.451889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.451897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.457168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.457187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.457195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.462456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.462478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.462485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.467972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.467995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.468003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.473337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.473365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.478379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.478399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.478407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.483657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.483677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.483686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.488780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.488800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.488808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.493923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.493944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.493951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.499015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.499035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.499042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.504187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.504208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.504215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.509423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.509442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.509450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.514597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.514617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.514625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.519957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.519978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.519986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.525116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.525136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.525144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.530308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.530328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.530336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.535395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.535415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.535423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.540582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.540600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.540608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.545641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.545661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.371 [2024-12-09 16:00:03.545669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.371 [2024-12-09 16:00:03.550731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.371 [2024-12-09 16:00:03.550751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.555828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.555849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.555862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.560963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.560983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.560991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.566120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.566142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.566149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.571283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.571304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.571311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.576714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.576735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.576742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.582276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.582297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.582305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.587720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.587742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.587749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.372 [2024-12-09 16:00:03.593273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.372 [2024-12-09 16:00:03.593294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.372 [2024-12-09 16:00:03.593303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.632 [2024-12-09 16:00:03.598613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.598634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.598642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.603990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.604016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.604024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.609283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.609305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.609313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.614585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.614606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.614614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.619857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.619877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.619885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.625295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.625315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.625324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.630709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.630730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.633 [2024-12-09 16:00:03.630738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.633 [2024-12-09 16:00:03.635566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420)
00:27:08.633 [2024-12-09 16:00:03.635587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.635595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.640578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.640600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.640609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.645811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.645831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.645838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.650896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.650916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.650924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.655949] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.655970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.655978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.661073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.661093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.661101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.666237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.666257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.666265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.671401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.671421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.671429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.676667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.676687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.676695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.681882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.681903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.681911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.686884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.686906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.686914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.691447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.691468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.691480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.696607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.696629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.696638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.701569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.701591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.701599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.706708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.706730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.706738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.711727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.711747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.711755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.714544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.714563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.714571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.719675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.719695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.719702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.724749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.724771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.729941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.633 [2024-12-09 16:00:03.729962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:08.633 [2024-12-09 16:00:03.729970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.633 [2024-12-09 16:00:03.735113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.735134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.735141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.740305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.740326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.740334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.745570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.745590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.745598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.750659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.750688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.756541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.756561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.756569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.763150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.763170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.763178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.770481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.770503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.770511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.777663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 
16:00:03.777684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.777692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.784912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.784933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.784944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.792494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.792515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.792523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.799192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.799213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.799227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.806405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.806426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.806434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.814553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.814575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.814584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.822670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.822691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.822699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.830549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.830570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.830579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.839450] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.839472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.839480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.846562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.846585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.846593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.634 [2024-12-09 16:00:03.853035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.634 [2024-12-09 16:00:03.853060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.634 [2024-12-09 16:00:03.853069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.860407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.860430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.860439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.867231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.867254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.867261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.874497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.874518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.874526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.882603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.882625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.890174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.890196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.890204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.897847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.897869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.897877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.902631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.902653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.902660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.907980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.908002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.908009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.913753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.913774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 
16:00:03.913782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.894 [2024-12-09 16:00:03.919250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13fa420) 00:27:08.894 [2024-12-09 16:00:03.919271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.894 [2024-12-09 16:00:03.919278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.894 5707.50 IOPS, 713.44 MiB/s 00:27:08.894 Latency(us) 00:27:08.894 [2024-12-09T15:00:04.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.894 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:08.894 nvme0n1 : 2.00 5707.38 713.42 0.00 0.00 2800.62 635.86 14667.58 00:27:08.894 [2024-12-09T15:00:04.122Z] =================================================================================================================== 00:27:08.894 [2024-12-09T15:00:04.122Z] Total : 5707.38 713.42 0.00 0.00 2800.62 635.86 14667.58 00:27:08.894 { 00:27:08.894 "results": [ 00:27:08.894 { 00:27:08.894 "job": "nvme0n1", 00:27:08.894 "core_mask": "0x2", 00:27:08.894 "workload": "randread", 00:27:08.894 "status": "finished", 00:27:08.894 "queue_depth": 16, 00:27:08.894 "io_size": 131072, 00:27:08.894 "runtime": 2.002846, 00:27:08.894 "iops": 5707.378400536038, 00:27:08.894 "mibps": 713.4223000670047, 00:27:08.894 "io_failed": 0, 00:27:08.894 "io_timeout": 0, 00:27:08.894 "avg_latency_us": 2800.6193323918665, 00:27:08.894 "min_latency_us": 635.8552380952381, 00:27:08.894 "max_latency_us": 14667.580952380953 00:27:08.894 } 00:27:08.894 ], 00:27:08.894 "core_count": 1 00:27:08.894 } 00:27:08.894 16:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:08.894 16:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:08.894 16:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:08.894 | .driver_specific 00:27:08.894 | .nvme_error 00:27:08.894 | .status_code 00:27:08.894 | .command_transient_transport_error' 00:27:08.894 16:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 )) 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2145881 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2145881 ']' 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2145881 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145881 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.154 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145881' 00:27:09.154 killing process with pid 2145881 00:27:09.155 
16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2145881 00:27:09.155 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.155 00:27:09.155 Latency(us) 00:27:09.155 [2024-12-09T15:00:04.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.155 [2024-12-09T15:00:04.383Z] =================================================================================================================== 00:27:09.155 [2024-12-09T15:00:04.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2145881 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2146533 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2146533 /var/tmp/bperf.sock 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2146533 ']' 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.155 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.414 [2024-12-09 16:00:04.398280] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:09.414 [2024-12-09 16:00:04.398326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146533 ] 00:27:09.414 [2024-12-09 16:00:04.471828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.414 [2024-12-09 16:00:04.512066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.414 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.414 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:09.414 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.414 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.672 
16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:09.672 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.672 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.672 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.672 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.672 16:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.932 nvme0n1 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:09.932 16:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.191 Running I/O for 2 seconds... 
00:27:10.191 [2024-12-09 16:00:05.190571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef46d0 00:27:10.191 [2024-12-09 16:00:05.191534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.191563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.200667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef96f8 00:27:10.191 [2024-12-09 16:00:05.201787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.201808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.210280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee4de8 00:27:10.191 [2024-12-09 16:00:05.211616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.211636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.219640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8618 00:27:10.191 [2024-12-09 16:00:05.221107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.221126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.225954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eebfd0 00:27:10.191 [2024-12-09 16:00:05.226592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.226611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.234440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4f40 00:27:10.191 [2024-12-09 16:00:05.235054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.235072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.244399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3d08 00:27:10.191 [2024-12-09 16:00:05.245059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.245077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.254385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef2510 00:27:10.191 [2024-12-09 16:00:05.255506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.255524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.261475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0 00:27:10.191 [2024-12-09 16:00:05.262110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.262128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.270840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede470 00:27:10.191 [2024-12-09 16:00:05.271725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.271744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.281673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf118 00:27:10.191 [2024-12-09 16:00:05.282945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.282964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.290992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eee5c8 00:27:10.191 [2024-12-09 16:00:05.292512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.292530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.297499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eef6a8 00:27:10.191 [2024-12-09 16:00:05.298250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.298269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.308424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee6300 00:27:10.191 [2024-12-09 16:00:05.309716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.309735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.315629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef31b8 00:27:10.191 [2024-12-09 16:00:05.316394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.316422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.325391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeee38 00:27:10.191 [2024-12-09 16:00:05.325935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 
[2024-12-09 16:00:05.325954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.334456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef92c0 00:27:10.191 [2024-12-09 16:00:05.335370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.335388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.343335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec840 00:27:10.191 [2024-12-09 16:00:05.344113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.344131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.352458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee6300 00:27:10.191 [2024-12-09 16:00:05.353152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.353171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.361185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efbcf0 00:27:10.191 [2024-12-09 16:00:05.362060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13114 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.362080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.370103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5ec8 00:27:10.191 [2024-12-09 16:00:05.371122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.371141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.379145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef96f8 00:27:10.191 [2024-12-09 16:00:05.379717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.379735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.388980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efbcf0 00:27:10.191 [2024-12-09 16:00:05.390216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.390239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.397317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eff3c8 00:27:10.191 [2024-12-09 16:00:05.398123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:14209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.398142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.406248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efac10 00:27:10.191 [2024-12-09 16:00:05.407071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.191 [2024-12-09 16:00:05.407090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.191 [2024-12-09 16:00:05.415165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eff3c8 00:27:10.191 [2024-12-09 16:00:05.415973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.192 [2024-12-09 16:00:05.415992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.450 [2024-12-09 16:00:05.424729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf118 00:27:10.450 [2024-12-09 16:00:05.425848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.450 [2024-12-09 16:00:05.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:10.450 [2024-12-09 16:00:05.433082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef9b30 00:27:10.451 [2024-12-09 16:00:05.433837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.433855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.441808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef81e0 00:27:10.451 [2024-12-09 16:00:05.442608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.442627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.450938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee01f8 00:27:10.451 [2024-12-09 16:00:05.451726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.451744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.460214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1868 00:27:10.451 [2024-12-09 16:00:05.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.460804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.470967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5a90 00:27:10.451 
[2024-12-09 16:00:05.472432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.472450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.477244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eea680 00:27:10.451 [2024-12-09 16:00:05.477912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.477930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.487839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee84c0 00:27:10.451 [2024-12-09 16:00:05.488947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.488966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.495105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee2c28 00:27:10.451 [2024-12-09 16:00:05.495747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.495765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.504006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) 
with pdu=0x200016eefae0 00:27:10.451 [2024-12-09 16:00:05.504658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.504676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.512884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee99d8 00:27:10.451 [2024-12-09 16:00:05.513530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.513548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.521813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee6fa8 00:27:10.451 [2024-12-09 16:00:05.522447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.522465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.530707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede8a8 00:27:10.451 [2024-12-09 16:00:05.531368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.531386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.539852] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef35f0 00:27:10.451 [2024-12-09 16:00:05.540274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.540292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.550066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0 00:27:10.451 [2024-12-09 16:00:05.551281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.551302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.558356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee9e10 00:27:10.451 [2024-12-09 16:00:05.559219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.559238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.567142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef92c0 00:27:10.451 [2024-12-09 16:00:05.568050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.568069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.576042] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec840 00:27:10.451 [2024-12-09 16:00:05.576904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.576921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.584915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efac10 00:27:10.451 [2024-12-09 16:00:05.585777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.593798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4b08 00:27:10.451 [2024-12-09 16:00:05.594679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.594697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.602672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef5be8 00:27:10.451 [2024-12-09 16:00:05.603527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.603546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:27:10.451 [2024-12-09 16:00:05.611628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538 00:27:10.451 [2024-12-09 16:00:05.612508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.612526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.620518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf550 00:27:10.451 [2024-12-09 16:00:05.621383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.621400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.629401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8a50 00:27:10.451 [2024-12-09 16:00:05.630253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.630271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.638288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf988 00:27:10.451 [2024-12-09 16:00:05.639140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.639158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.647166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7100 00:27:10.451 [2024-12-09 16:00:05.648052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.648070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.657241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1b48 00:27:10.451 [2024-12-09 16:00:05.658574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.658591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.665532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efa3a0 00:27:10.451 [2024-12-09 16:00:05.666529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.451 [2024-12-09 16:00:05.666548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:10.451 [2024-12-09 16:00:05.674357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef5378 00:27:10.452 [2024-12-09 16:00:05.675381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.452 [2024-12-09 16:00:05.675399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.683722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee88f8
00:27:10.711 [2024-12-09 16:00:05.685017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.685035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.693790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee7818
00:27:10.711 [2024-12-09 16:00:05.695288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.695306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.700058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee7818
00:27:10.711 [2024-12-09 16:00:05.700808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.700826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.710104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1430
00:27:10.711 [2024-12-09 16:00:05.711017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.711036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.719347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee01f8
00:27:10.711 [2024-12-09 16:00:05.720358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.720376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.728791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0788
00:27:10.711 [2024-12-09 16:00:05.729873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.729892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.738118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee0ea0
00:27:10.711 [2024-12-09 16:00:05.739316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.739334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.746607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee8088
00:27:10.711 [2024-12-09 16:00:05.747806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.747824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.754467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3498
00:27:10.711 [2024-12-09 16:00:05.754990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.711 [2024-12-09 16:00:05.755008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:10.711 [2024-12-09 16:00:05.763489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeb328
00:27:10.712 [2024-12-09 16:00:05.764369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.764387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.771975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5a90
00:27:10.712 [2024-12-09 16:00:05.772820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.772837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.781886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538
00:27:10.712 [2024-12-09 16:00:05.782857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.782880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.790782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef5be8
00:27:10.712 [2024-12-09 16:00:05.791806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.791825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.799730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee4de8
00:27:10.712 [2024-12-09 16:00:05.800759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.800778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.808800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeaef0
00:27:10.712 [2024-12-09 16:00:05.809794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.809812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.817698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef81e0
00:27:10.712 [2024-12-09 16:00:05.818702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.818720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.826667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0
00:27:10.712 [2024-12-09 16:00:05.827649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.827668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.835539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8e88
00:27:10.712 [2024-12-09 16:00:05.836522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.836540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.844032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec840
00:27:10.712 [2024-12-09 16:00:05.844944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.844963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.853027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeb760
00:27:10.712 [2024-12-09 16:00:05.853979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.853998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.862567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8e88
00:27:10.712 [2024-12-09 16:00:05.863479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.863498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.870766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec840
00:27:10.712 [2024-12-09 16:00:05.871644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.871662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.880111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf118
00:27:10.712 [2024-12-09 16:00:05.881092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.881110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.888384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef5be8
00:27:10.712 [2024-12-09 16:00:05.889005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.889023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.897223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0
00:27:10.712 [2024-12-09 16:00:05.897844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.897862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.906281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef5378
00:27:10.712 [2024-12-09 16:00:05.907202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.907224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.916184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efd208
00:27:10.712 [2024-12-09 16:00:05.917252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.917270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.925499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eebfd0
00:27:10.712 [2024-12-09 16:00:05.926753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.926772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:10.712 [2024-12-09 16:00:05.933850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee4de8
00:27:10.712 [2024-12-09 16:00:05.934823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.712 [2024-12-09 16:00:05.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.942961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efd208
00:27:10.972 [2024-12-09 16:00:05.944000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.944019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.952488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3d08
00:27:10.972 [2024-12-09 16:00:05.953732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.953750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.961878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee7c50
00:27:10.972 [2024-12-09 16:00:05.963244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.963261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.970270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efb8b8
00:27:10.972 [2024-12-09 16:00:05.971236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.971254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.979061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee12d8
00:27:10.972 [2024-12-09 16:00:05.980069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.980088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.987307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeb760
00:27:10.972 [2024-12-09 16:00:05.988524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.988542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:05.995557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1868
00:27:10.972 [2024-12-09 16:00:05.996188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:05.996206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.004965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede470
00:27:10.972 [2024-12-09 16:00:06.005894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.005913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.014353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef6890
00:27:10.972 [2024-12-09 16:00:06.015255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.015273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.023668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538
00:27:10.972 [2024-12-09 16:00:06.024684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.024703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.030732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eea248
00:27:10.972 [2024-12-09 16:00:06.031268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.031287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.040016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede038
00:27:10.972 [2024-12-09 16:00:06.040685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.040704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.049489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede038
00:27:10.972 [2024-12-09 16:00:06.050230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.050249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.057782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee38d0
00:27:10.972 [2024-12-09 16:00:06.058473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.972 [2024-12-09 16:00:06.058491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:10.972 [2024-12-09 16:00:06.066847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eebfd0
00:27:10.973 [2024-12-09 16:00:06.067498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.067516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.075315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1f80
00:27:10.973 [2024-12-09 16:00:06.075975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.075992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.086423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef46d0
00:27:10.973 [2024-12-09 16:00:06.087635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.087653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.094800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef3a28
00:27:10.973 [2024-12-09 16:00:06.095764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.095786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.103741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede470
00:27:10.973 [2024-12-09 16:00:06.104582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.104600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.113097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4f40
00:27:10.973 [2024-12-09 16:00:06.114132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.114150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.122938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8a50
00:27:10.973 [2024-12-09 16:00:06.124305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.124322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.131629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee8088
00:27:10.973 [2024-12-09 16:00:06.132990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.133007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.138175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef6020
00:27:10.973 [2024-12-09 16:00:06.138825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.138843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.149117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed4e8
00:27:10.973 [2024-12-09 16:00:06.150150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.150168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.157981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0bc0
00:27:10.973 [2024-12-09 16:00:06.159098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.159116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.166995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede8a8
00:27:10.973 [2024-12-09 16:00:06.168129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.168147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.175480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efac10
00:27:10.973 [2024-12-09 16:00:06.176515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.176533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:10.973 28255.00 IOPS, 110.37 MiB/s [2024-12-09T15:00:06.201Z] [2024-12-09 16:00:06.185824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1710
00:27:10.973 [2024-12-09 16:00:06.186542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.186562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:10.973 [2024-12-09 16:00:06.194565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef8618
00:27:10.973 [2024-12-09 16:00:06.195575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:10.973 [2024-12-09 16:00:06.195594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.203770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef46d0
00:27:11.233 [2024-12-09 16:00:06.204677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.204696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.213491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eecc78
00:27:11.233 [2024-12-09 16:00:06.214585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.214604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.222947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eef270
00:27:11.233 [2024-12-09 16:00:06.224111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.224129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.230029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efc998
00:27:11.233 [2024-12-09 16:00:06.230752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.230770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.240855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeaef0
00:27:11.233 [2024-12-09 16:00:06.242028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.242046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.247941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee49b0
00:27:11.233 [2024-12-09 16:00:06.248638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.248655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.259456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee8d30
00:27:11.233 [2024-12-09 16:00:06.260934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.260952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.265912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eecc78
00:27:11.233 [2024-12-09 16:00:06.266720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.266738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.276868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf988
00:27:11.233 [2024-12-09 16:00:06.278148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.278165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.285914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efef90
00:27:11.233 [2024-12-09 16:00:06.287206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.287227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.292541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efc128
00:27:11.233 [2024-12-09 16:00:06.293272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.293290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.303680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5ec8
00:27:11.233 [2024-12-09 16:00:06.304876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.304894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.312222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efdeb0
00:27:11.233 [2024-12-09 16:00:06.313136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.313154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.320751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3498
00:27:11.233 [2024-12-09 16:00:06.321469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.321487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.329308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eef6a8
00:27:11.233 [2024-12-09 16:00:06.330014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.330036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.338618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eebfd0
00:27:11.233 [2024-12-09 16:00:06.339487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.339506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.348007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee73e0
00:27:11.233 [2024-12-09 16:00:06.348968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.348986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.358993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efb480
00:27:11.233 [2024-12-09 16:00:06.360454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.360472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.365435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efb480
00:27:11.233 [2024-12-09 16:00:06.366184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.366202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:11.233 [2024-12-09 16:00:06.376231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef2948
00:27:11.233 [2024-12-09 16:00:06.377330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.233 [2024-12-09 16:00:06.377348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.233 [2024-12-09 16:00:06.384652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eea680 00:27:11.233 [2024-12-09 16:00:06.385765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.385782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.393704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eedd58 00:27:11.234 [2024-12-09 16:00:06.394370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.394389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.402367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeee38 00:27:11.234 [2024-12-09 16:00:06.403348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.403367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.411372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef92c0 00:27:11.234 [2024-12-09 16:00:06.412378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.412396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.420458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee7818 00:27:11.234 [2024-12-09 16:00:06.421434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.421452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.429019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee9e10 00:27:11.234 [2024-12-09 16:00:06.429699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.429718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.437150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1ca0 00:27:11.234 [2024-12-09 16:00:06.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.437836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.445997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1710 00:27:11.234 [2024-12-09 16:00:06.446694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.446712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.234 [2024-12-09 16:00:06.456852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1868 00:27:11.234 [2024-12-09 16:00:06.457809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.234 [2024-12-09 16:00:06.457828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.493 [2024-12-09 16:00:06.466517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efb048 00:27:11.493 [2024-12-09 16:00:06.467714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.493 [2024-12-09 16:00:06.467735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.493 [2024-12-09 16:00:06.475616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeaab8 00:27:11.493 [2024-12-09 16:00:06.476560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.493 [2024-12-09 16:00:06.476579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.483907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1868 00:27:11.494 [2024-12-09 16:00:06.484814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 
[2024-12-09 16:00:06.484832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.492957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeb760 00:27:11.494 [2024-12-09 16:00:06.493901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.493919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.501532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee7818 00:27:11.494 [2024-12-09 16:00:06.502201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.502239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.509634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1b48 00:27:11.494 [2024-12-09 16:00:06.510302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.510320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.520013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5a90 00:27:11.494 [2024-12-09 16:00:06.520926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12535 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.520944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.528508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0ff8 00:27:11.494 [2024-12-09 16:00:06.529351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.529369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.537971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef35f0 00:27:11.494 [2024-12-09 16:00:06.538944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.538963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.547077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeea00 00:27:11.494 [2024-12-09 16:00:06.548001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.548019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.555954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0bc0 00:27:11.494 [2024-12-09 16:00:06.556983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:89 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.557002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.565737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf118 00:27:11.494 [2024-12-09 16:00:06.566877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.566901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.576163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efa7d8 00:27:11.494 [2024-12-09 16:00:06.577251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.577272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.588094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0 00:27:11.494 [2024-12-09 16:00:06.589736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.589756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.595294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4f40 00:27:11.494 [2024-12-09 16:00:06.595969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.595989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.606106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7970 00:27:11.494 [2024-12-09 16:00:06.606960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.606981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.616190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee88f8 00:27:11.494 [2024-12-09 16:00:06.617233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.617252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.627175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee88f8 00:27:11.494 [2024-12-09 16:00:06.628627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.628646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.633668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee38d0 00:27:11.494 
[2024-12-09 16:00:06.634327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.634346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.643233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf550 00:27:11.494 [2024-12-09 16:00:06.644054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.644073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.652296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ede8a8 00:27:11.494 [2024-12-09 16:00:06.653110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.653129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.661002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee49b0 00:27:11.494 [2024-12-09 16:00:06.661466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.672279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5216c0) with pdu=0x200016ee4140 00:27:11.494 [2024-12-09 16:00:06.673783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.673802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.678712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538 00:27:11.494 [2024-12-09 16:00:06.679433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.679451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.688603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538 00:27:11.494 [2024-12-09 16:00:06.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.689396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.697715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec408 00:27:11.494 [2024-12-09 16:00:06.698446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.698464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.706970] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eeb328 00:27:11.494 [2024-12-09 16:00:06.707691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.707710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.494 [2024-12-09 16:00:06.715354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf988 00:27:11.494 [2024-12-09 16:00:06.716063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.494 [2024-12-09 16:00:06.716081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.726254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016edf988 00:27:11.754 [2024-12-09 16:00:06.727454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.727473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.735671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0ff8 00:27:11.754 [2024-12-09 16:00:06.736959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.736977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:27:11.754 [2024-12-09 16:00:06.742171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eef6a8 00:27:11.754 [2024-12-09 16:00:06.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.742769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.752584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efbcf0 00:27:11.754 [2024-12-09 16:00:06.753494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.753513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.761451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efb8b8 00:27:11.754 [2024-12-09 16:00:06.762572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.762590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.769748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3498 00:27:11.754 [2024-12-09 16:00:06.770412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.770431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.778773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5a90 00:27:11.754 [2024-12-09 16:00:06.779320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.779339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.788095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee1f80 00:27:11.754 [2024-12-09 16:00:06.788764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.788783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.796493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0 00:27:11.754 [2024-12-09 16:00:06.797067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.797084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.805677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee2c28 00:27:11.754 [2024-12-09 16:00:06.806457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.754 [2024-12-09 16:00:06.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.754 [2024-12-09 16:00:06.813880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef7538 00:27:11.754 [2024-12-09 16:00:06.814755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.814772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.822758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3498 00:27:11.755 [2024-12-09 16:00:06.823306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.823325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.833349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efd640 00:27:11.755 [2024-12-09 16:00:06.834362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.834381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.841504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efd640 00:27:11.755 [2024-12-09 16:00:06.842577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.842594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.849906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efbcf0 00:27:11.755 [2024-12-09 16:00:06.850724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.850742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.858857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efd640 00:27:11.755 [2024-12-09 16:00:06.859630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.859647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.867286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4f40 00:27:11.755 [2024-12-09 16:00:06.868025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.868043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.876612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee0ea0 00:27:11.755 [2024-12-09 16:00:06.877486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 
[2024-12-09 16:00:06.877504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.887585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee5a90 00:27:11.755 [2024-12-09 16:00:06.888951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.888972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.894046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eddc00 00:27:11.755 [2024-12-09 16:00:06.894689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.894708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.903966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed0b0 00:27:11.755 [2024-12-09 16:00:06.904663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.904681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.913328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4f40 00:27:11.755 [2024-12-09 16:00:06.914358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21089 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.914376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.922644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef2d80 00:27:11.755 [2024-12-09 16:00:06.923766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.923785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.930900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efef90 00:27:11.755 [2024-12-09 16:00:06.931589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.931608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.940256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee3060 00:27:11.755 [2024-12-09 16:00:06.941260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.941278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.948503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef1430 00:27:11.755 [2024-12-09 16:00:06.949092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:24125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.949111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.959722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee6738 00:27:11.755 [2024-12-09 16:00:06.961113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.961131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.966231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee01f8 00:27:11.755 [2024-12-09 16:00:06.966913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.966931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.755 [2024-12-09 16:00:06.977064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef9f68 00:27:11.755 [2024-12-09 16:00:06.978132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.755 [2024-12-09 16:00:06.978151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:06.985779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef31b8 00:27:12.015 [2024-12-09 16:00:06.986837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:06.986855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:06.995119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef92c0 00:27:12.015 [2024-12-09 16:00:06.996268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:06.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.004432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee4578 00:27:12.015 [2024-12-09 16:00:07.005688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.005708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.013726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee0630 00:27:12.015 [2024-12-09 16:00:07.015088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.015106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.023105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee6300 00:27:12.015 
[2024-12-09 16:00:07.024616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.024634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.029398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efa3a0 00:27:12.015 [2024-12-09 16:00:07.030105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.030124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.037850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eed4e8 00:27:12.015 [2024-12-09 16:00:07.038525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.038543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.046888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef6cc8 00:27:12.015 [2024-12-09 16:00:07.047561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.047579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.056188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5216c0) with pdu=0x200016eeb760 00:27:12.015 [2024-12-09 16:00:07.056771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.056789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.066279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016eec408 00:27:12.015 [2024-12-09 16:00:07.067462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.067479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.075500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef20d8 00:27:12.015 [2024-12-09 16:00:07.076222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.076240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.084434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0ff8 00:27:12.015 [2024-12-09 16:00:07.085392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.085409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.094496] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef0ff8 00:27:12.015 [2024-12-09 16:00:07.096006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.096024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.100755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef20d8 00:27:12.015 [2024-12-09 16:00:07.101442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.101460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.110065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efda78 00:27:12.015 [2024-12-09 16:00:07.110876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.110895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.118533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee84c0 00:27:12.015 [2024-12-09 16:00:07.119336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.119358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:27:12.015 [2024-12-09 16:00:07.128649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee9e10 00:27:12.015 [2024-12-09 16:00:07.129516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.129537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.138082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef4b08 00:27:12.015 [2024-12-09 16:00:07.139285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.015 [2024-12-09 16:00:07.139304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.015 [2024-12-09 16:00:07.146364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee23b8 00:27:12.016 [2024-12-09 16:00:07.147408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.016 [2024-12-09 16:00:07.147427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.016 [2024-12-09 16:00:07.155279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ef46d0 00:27:12.016 [2024-12-09 16:00:07.156021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.016 [2024-12-09 16:00:07.156040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.016 [2024-12-09 16:00:07.164268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016efda78 00:27:12.016 [2024-12-09 16:00:07.165006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.016 [2024-12-09 16:00:07.165024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.016 [2024-12-09 16:00:07.173517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5216c0) with pdu=0x200016ee88f8 00:27:12.016 [2024-12-09 16:00:07.174574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.016 [2024-12-09 16:00:07.174592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.016 28212.50 IOPS, 110.21 MiB/s 00:27:12.016 Latency(us) 00:27:12.016 [2024-12-09T15:00:07.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.016 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.016 nvme0n1 : 2.00 28246.73 110.34 0.00 0.00 4527.48 2231.34 13544.11 00:27:12.016 [2024-12-09T15:00:07.244Z] =================================================================================================================== 00:27:12.016 [2024-12-09T15:00:07.244Z] Total : 28246.73 110.34 0.00 0.00 4527.48 2231.34 13544.11 00:27:12.016 { 00:27:12.016 "results": [ 00:27:12.016 { 00:27:12.016 "job": "nvme0n1", 00:27:12.016 "core_mask": "0x2", 00:27:12.016 "workload": "randwrite", 00:27:12.016 "status": "finished", 00:27:12.016 "queue_depth": 128, 00:27:12.016 "io_size": 4096, 00:27:12.016 "runtime": 2.002108, 00:27:12.016 "iops": 28246.727948742027, 
00:27:12.016 "mibps": 110.33878104977354, 00:27:12.016 "io_failed": 0, 00:27:12.016 "io_timeout": 0, 00:27:12.016 "avg_latency_us": 4527.483285413682, 00:27:12.016 "min_latency_us": 2231.344761904762, 00:27:12.016 "max_latency_us": 13544.106666666667 00:27:12.016 } 00:27:12.016 ], 00:27:12.016 "core_count": 1 00:27:12.016 } 00:27:12.016 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:12.016 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:12.016 | .driver_specific 00:27:12.016 | .nvme_error 00:27:12.016 | .status_code 00:27:12.016 | .command_transient_transport_error' 00:27:12.016 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:12.016 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2146533 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2146533 ']' 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2146533 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146533 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146533' 00:27:12.275 killing process with pid 2146533 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2146533 00:27:12.275 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.275 00:27:12.275 Latency(us) 00:27:12.275 [2024-12-09T15:00:07.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.275 [2024-12-09T15:00:07.503Z] =================================================================================================================== 00:27:12.275 [2024-12-09T15:00:07.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.275 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2146533 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2147376 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2147376 /var/tmp/bperf.sock 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2147376 ']' 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:12.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.534 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.535 [2024-12-09 16:00:07.672589] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:12.535 [2024-12-09 16:00:07.672637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147376 ] 00:27:12.535 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.535 Zero copy mechanism will not be used. 
00:27:12.535 [2024-12-09 16:00:07.746722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.794 [2024-12-09 16:00:07.787768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.794 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.794 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:12.794 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.794 16:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.053 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.312 nvme0n1 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:13.312 16:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.312 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:13.312 Zero copy mechanism will not be used. 00:27:13.312 Running I/O for 2 seconds... 00:27:13.312 [2024-12-09 16:00:08.528103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.312 [2024-12-09 16:00:08.528178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.312 [2024-12-09 16:00:08.528204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.312 [2024-12-09 16:00:08.532621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.312 [2024-12-09 16:00:08.532693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.312 [2024-12-09 16:00:08.532716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.312 [2024-12-09 
16:00:08.536927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.312 [2024-12-09 16:00:08.536997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.312 [2024-12-09 16:00:08.537017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.541162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.541234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.541253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.545421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.545490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.545509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.549555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.549616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.549635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.553662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.553720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.553738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.557740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.557797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.557815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.561842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.561894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.561913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.565934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.565991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.566009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.570009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.570095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.574102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.574167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.574184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.578153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.578208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.578233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.582203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.582267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.582285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.586278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.586331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.586349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.590319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.590375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.590394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.594355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.594417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.594435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.598367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.598424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 
[2024-12-09 16:00:08.598442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.602414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.602470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.602492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.606483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.606536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.606555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.610478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.610533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.610551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.614494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.614550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.614568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.618500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.618559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.573 [2024-12-09 16:00:08.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.573 [2024-12-09 16:00:08.622536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.573 [2024-12-09 16:00:08.622604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.622622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.626577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.626630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.626648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.630597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.630661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.630679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.634625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.634695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.634713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.638662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.638724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.638742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.642707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.642772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.642791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.646744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.646816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.646834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.650802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.650867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.650886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.654837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.654893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.654911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.658857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.658923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.658941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.662925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 
[2024-12-09 16:00:08.662981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.662999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.666933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.666985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.667003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.671009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.671120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.675792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.675973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.675992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.681849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.682045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.682066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.687357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.687439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.687459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.692235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.692341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.692360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.697201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.697307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.697326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.702341] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.702395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.702414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.706998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.707086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.707105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.712701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.712862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.712881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.718782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.718858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.718882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:13.574 [2024-12-09 16:00:08.724554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.724728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.724747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.731160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.731310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.731330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.737671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.737837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.737856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.744008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.744169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.744189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.750431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.750514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.750533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.755294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.574 [2024-12-09 16:00:08.755374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.574 [2024-12-09 16:00:08.755392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.574 [2024-12-09 16:00:08.760417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.760507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.760524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.765379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.765485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.765504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.770246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.770338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.770358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.775281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.775359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.775378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.780137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.780314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.780333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.785173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.785278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.785297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.790338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.790462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.790481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.575 [2024-12-09 16:00:08.795372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.575 [2024-12-09 16:00:08.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.575 [2024-12-09 16:00:08.795547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.800487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.800585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.800604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.805828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 
[2024-12-09 16:00:08.806011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.810846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.811011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.811030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.815890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.816050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.816069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.820802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.820909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.820928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.825933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.826106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.826125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.835 [2024-12-09 16:00:08.830922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.835 [2024-12-09 16:00:08.831017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.835 [2024-12-09 16:00:08.831036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.835867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.835967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.835985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.841388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.841471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.846924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.847174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.847194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.852184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.852461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.852481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.856848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.857099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.857121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.861735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.861986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.866622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.866881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.866899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.871662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.871906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.871925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.877614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.877882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.883439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.883681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.883700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.889736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 
[2024-12-09 16:00:08.890002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.890021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.895998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.896233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.896252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.902380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.902626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.902645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.908089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.908382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.908401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.914643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.914880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.914899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.920908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.921018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.921037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.925945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.926173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.926193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.930539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.930773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.930792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.934994] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.935268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.939483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.939732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.939750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.943604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.943855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.943873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.836 [2024-12-09 16:00:08.947781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:13.836 [2024-12-09 16:00:08.948040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.836 [2024-12-09 16:00:08.948059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:14.100 [2024-12-09 16:00:09.264075]
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.264309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.264328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.268691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.268926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.268945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.273676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.273956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.273976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.277964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.278250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.278270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:14.100 [2024-12-09 16:00:09.282773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.283007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.283042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.288531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.288867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.288887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.294686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.294995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.295014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.300610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.300928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.300950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.306737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.307035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.307056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.312641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.312994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.313013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.318809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.319047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.319067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.100 [2024-12-09 16:00:09.323879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.100 [2024-12-09 16:00:09.324111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.100 [2024-12-09 16:00:09.324130] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.360 [2024-12-09 16:00:09.330198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.360 [2024-12-09 16:00:09.330449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.360 [2024-12-09 16:00:09.330470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.360 [2024-12-09 16:00:09.336234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.360 [2024-12-09 16:00:09.336529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.360 [2024-12-09 16:00:09.336549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.360 [2024-12-09 16:00:09.343548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.343778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.343797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.349530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.349768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.349787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.356355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.356537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.356556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.362357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.362575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.362594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.368827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.369080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.369099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.374451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.374637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.361 [2024-12-09 16:00:09.374656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.378945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.379157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.379177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.383487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.383706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.383728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.388122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.388336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.388355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.392706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.392901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.392920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.397361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.397564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.397583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.402160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.402367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.402386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.407081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.407288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.407307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.411595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.411792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.411810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.416123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.416334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.416353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.421127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.421346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.421366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.425967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.426170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.426189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.431062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.431289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.431308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.435830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.436038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.436056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.440132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.440373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.440393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.444316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.444540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.444559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.448254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with 
pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.448447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.448466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.452325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.452528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.452547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.456330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.456527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.456546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.460362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.460562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.460581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.464409] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.464597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.464615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.468498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.468686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.468705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.361 [2024-12-09 16:00:09.472528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.361 [2024-12-09 16:00:09.472710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.361 [2024-12-09 16:00:09.472729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.476527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.476721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.476741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.480602] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.480783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.484603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.484791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.484810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.488687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.488893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.488911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.492675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.492856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.492875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:14.362 [2024-12-09 16:00:09.497344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.497537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.502933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.503257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.503276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.508911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.509039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.509059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.362 [2024-12-09 16:00:09.515423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.362 [2024-12-09 16:00:09.515637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.362 [2024-12-09 16:00:09.515656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.520367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.520564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.520584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.524552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.524743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.524763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.528552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.528738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.528755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.362 6671.00 IOPS, 833.88 MiB/s [2024-12-09T15:00:09.590Z] [2024-12-09 16:00:09.533468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.533653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.533672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.538309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.538556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.544678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.544866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.544886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.551159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.551367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.551386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.557596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.557807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.565066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.565265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.565284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.571877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.572114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.572134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.578738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.578930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.578950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.362 [2024-12-09 16:00:09.585062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.362 [2024-12-09 16:00:09.585278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.362 [2024-12-09 16:00:09.585298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.591772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.592117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.592137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.598818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.599084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.599104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.605329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.605594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.605613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.611910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.612187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.612207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.618736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.618943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.618963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.625492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.625808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.625827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.632053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.632246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.632266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.637675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.637845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.637864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.643181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.643425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.643445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.648498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.648676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.648696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.654567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.654752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.654778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.660815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.661092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.661111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.667056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.667307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.667326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.673830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.674146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.679670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.679934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.679954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.685939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.686164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.686184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.691667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.691852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.691871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.697793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.697981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.698001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.704296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.704476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.704496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.710528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.710733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.710754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.716418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.716698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.716718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.722880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.723089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.723110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.729272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.729552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.729572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.735090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.735413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.741768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.742049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.742070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.747068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.747278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.747297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.751391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.623 [2024-12-09 16:00:09.751570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.623 [2024-12-09 16:00:09.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.623 [2024-12-09 16:00:09.755338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.755517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.755537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.759311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.759493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.759513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.763186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.763383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.763407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.767103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.767288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.767307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.770991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.771173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.771192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.774831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.775013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.775033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.778778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.778956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.778976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.782852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.783033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.787398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.787697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.787717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.792172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.792360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.792386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.796256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.796437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.796457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.801124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.801327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.801347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.804969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.805150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.805169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.808815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.808997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.809016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.812567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.812768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.816292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.816475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.816495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.819971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.820151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.820171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.823732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.823914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.823934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.827443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.827627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.827647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.831339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.831520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.831539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.835162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.835346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.835366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.839043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.839228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.839248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.843012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.843190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.843209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.624 [2024-12-09 16:00:09.846992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.624 [2024-12-09 16:00:09.847179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.624 [2024-12-09 16:00:09.847198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.850995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.851192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.855056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.855246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.855265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.858993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.859169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.859189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.862982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.863160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.863179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.866989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.867170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.867189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.870882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.871056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.871075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.874755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.874935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.874954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.878652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.878829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.878849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.882445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.882625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.882644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.886266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.886442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.886461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.890251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.890440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.890458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.893952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.894136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.897639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.897819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.897839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.901280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.901462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.901482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.904926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.905104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.885 [2024-12-09 16:00:09.905124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:14.885 [2024-12-09 16:00:09.908651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.885 [2024-12-09 16:00:09.908828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.886 [2024-12-09 16:00:09.908848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:14.886 [2024-12-09 16:00:09.912526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.886 [2024-12-09 16:00:09.912706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.886 [2024-12-09 16:00:09.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:14.886 [2024-12-09 16:00:09.916752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.886 [2024-12-09 16:00:09.916933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.886 [2024-12-09 16:00:09.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:14.886 [2024-12-09 16:00:09.921088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8
00:27:14.886 [2024-12-09 16:00:09.921276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.886 [2024-12-09 16:00:09.921295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.925326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.925501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.925520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.929169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.929364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.929384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.933543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.933741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.937880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.938061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.938080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.941950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.942129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.942149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.945946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.946122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.946142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.949679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.949855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.949874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.953696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.953876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 
[2024-12-09 16:00:09.953895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.957646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.957830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.957849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.961609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.961788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.961807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.965571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.965749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.969508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.969689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.969709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.973451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.973630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.973650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.977434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.977615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.977635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.981184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.981369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.981388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.985011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.985191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.985211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.989454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.989630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.989650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.994115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.994299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.994319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:09.998301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:09.998484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:09.998507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.002554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:10.002749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.002769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.006603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:10.006802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.006822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.010597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:10.010776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.010796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.014911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:10.015099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.015119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.019346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 
[2024-12-09 16:00:10.019525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.019545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.886 [2024-12-09 16:00:10.023514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.886 [2024-12-09 16:00:10.023694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.886 [2024-12-09 16:00:10.023714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.027952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.028152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.028172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.032586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.032768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.032787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.037036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.037226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.037250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.041008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.041208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.041235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.044834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.045016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.045037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.048620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.048805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.048825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.052445] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.052631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.052651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.056244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.056429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.056448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.060114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.060302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.063877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.064064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.064084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:27:14.887 [2024-12-09 16:00:10.067710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.067893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.067913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.071659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.071840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.071860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.076570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.076748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.076768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.081068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.081254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.081274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.084977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.085152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.085172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.088998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.089181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.089200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.092806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.092985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.093005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.096647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.096829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.096849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.100924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.101105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.105595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.105783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.105805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.887 [2024-12-09 16:00:10.110270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:14.887 [2024-12-09 16:00:10.110449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.887 [2024-12-09 16:00:10.110469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.114517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.114702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.114722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.119476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.119653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.119673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.124171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.124356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.124376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.128743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.128921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.128942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.132838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.133021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 
[2024-12-09 16:00:10.133041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.137494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.137678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.137698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.142022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.142198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.142224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.146744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.146923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.146948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.151453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.151633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.151653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.155994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.156175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.156195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.160168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.160371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.160391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.164282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.164462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.164481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.168213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.168398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.148 [2024-12-09 16:00:10.168418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.148 [2024-12-09 16:00:10.172216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.148 [2024-12-09 16:00:10.172406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.172426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.176262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.176449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.176469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.180357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.180536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.180555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.184268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.184453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.184472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.188286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.188464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.188483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.192555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.192730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.192749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.197348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.197531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.197551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.201802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 
[2024-12-09 16:00:10.201982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.202002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.205912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.206098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.206118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.209949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.210130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.210149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.213959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.214137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.214156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.217974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.218152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.218171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.222842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.223023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.223042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.227286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.227465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.227485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.231416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.231597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.231616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.235751] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.235933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.235952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.240877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.241058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.241077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.244998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.245180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.245200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.249025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.249207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.249232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:15.149 [2024-12-09 16:00:10.252992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.253171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.253191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.257679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.257868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.257892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.263037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.263351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.263371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.269362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.269638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.269658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.275407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.275599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.275618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.280168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.280378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.280398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.284459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.284641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.284660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.288516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.288696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.288715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.292812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.293011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.149 [2024-12-09 16:00:10.293031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.149 [2024-12-09 16:00:10.296904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.149 [2024-12-09 16:00:10.297087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.297107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.301002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.301192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.301212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.305069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.305257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.305277] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.309122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.309307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.309327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.313131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.313317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.313337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.317188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.317380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.317399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.321380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.321567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.150 [2024-12-09 16:00:10.321587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.325377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.325564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.329445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.329628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.329648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.333621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.333826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.338671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.338859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.338878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.343318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.343500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.343520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.347351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.347532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.347551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.351405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.351582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.351602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.355382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.359325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.359513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.359533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.363370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.363548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.363568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.367690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.367873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.367892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.150 [2024-12-09 16:00:10.372201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.150 [2024-12-09 16:00:10.372395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.150 [2024-12-09 16:00:10.372419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.410 [2024-12-09 16:00:10.376666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.410 [2024-12-09 16:00:10.376840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.376860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.381321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.381504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.381524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.385527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.385708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.385727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.389728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 
00:27:15.411 [2024-12-09 16:00:10.389905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.394543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.394728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.394748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.399272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.399450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.399469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.404072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.404252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.404271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.408693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.408868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.408888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.413133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.413325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.413346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.417243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.417427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.417448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.421243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.421428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.421449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.425279] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.425460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.425479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.429328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.429505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.429524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.433521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.433698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.433717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.437632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.437811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.437831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:15.411 [2024-12-09 16:00:10.441661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.441840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.441860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.445596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.445776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.445796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.449644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.449828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.449847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.453913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.454090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.454109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.458627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.458808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.458827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.462768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.462947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.462967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.466878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.467060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.467079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.471138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.471340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.471361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.475135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.475319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.475338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.479183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.479390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.479411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.483173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.483373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.483400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.487132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.487331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.487351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.491018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.411 [2024-12-09 16:00:10.491194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.411 [2024-12-09 16:00:10.491214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.411 [2024-12-09 16:00:10.494933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.495115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.495134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.499370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.499546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.499565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.503917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.504093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.412 [2024-12-09 16:00:10.504113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.508596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.508776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.508796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.512836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.513013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.513033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.516668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.516844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.516863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.520360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.520542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.520562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.524063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.524249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.524269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.528167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.528374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.412 [2024-12-09 16:00:10.531787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x521a00) with pdu=0x200016eff3c8 00:27:15.412 [2024-12-09 16:00:10.531968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.412 [2024-12-09 16:00:10.531987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.412 6789.00 IOPS, 848.62 MiB/s 00:27:15.412 Latency(us) 00:27:15.412 [2024-12-09T15:00:10.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.412 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 16, IO size: 131072) 00:27:15.412 nvme0n1 : 2.00 6786.89 848.36 0.00 0.00 2353.48 1318.52 7739.49 00:27:15.412 [2024-12-09T15:00:10.640Z] =================================================================================================================== 00:27:15.412 [2024-12-09T15:00:10.640Z] Total : 6786.89 848.36 0.00 0.00 2353.48 1318.52 7739.49 00:27:15.412 { 00:27:15.412 "results": [ 00:27:15.412 { 00:27:15.412 "job": "nvme0n1", 00:27:15.412 "core_mask": "0x2", 00:27:15.412 "workload": "randwrite", 00:27:15.412 "status": "finished", 00:27:15.412 "queue_depth": 16, 00:27:15.412 "io_size": 131072, 00:27:15.412 "runtime": 2.003421, 00:27:15.412 "iops": 6786.891022905321, 00:27:15.412 "mibps": 848.3613778631651, 00:27:15.412 "io_failed": 0, 00:27:15.412 "io_timeout": 0, 00:27:15.412 "avg_latency_us": 2353.4795442972363, 00:27:15.412 "min_latency_us": 1318.5219047619048, 00:27:15.412 "max_latency_us": 7739.489523809524 00:27:15.412 } 00:27:15.412 ], 00:27:15.412 "core_count": 1 00:27:15.412 } 00:27:15.412 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.412 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.412 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.412 | .driver_specific 00:27:15.412 | .nvme_error 00:27:15.412 | .status_code 00:27:15.412 | .command_transient_transport_error' 00:27:15.412 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 439 > 0 )) 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2147376 00:27:15.671 
16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2147376 ']' 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2147376 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147376 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.671 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147376' 00:27:15.672 killing process with pid 2147376 00:27:15.672 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2147376 00:27:15.672 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.672 00:27:15.672 Latency(us) 00:27:15.672 [2024-12-09T15:00:10.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.672 [2024-12-09T15:00:10.900Z] =================================================================================================================== 00:27:15.672 [2024-12-09T15:00:10.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.672 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2147376 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2145232 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 2145232 ']' 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2145232 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.931 16:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145232 00:27:15.931 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:15.931 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:15.931 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145232' 00:27:15.931 killing process with pid 2145232 00:27:15.931 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2145232 00:27:15.931 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2145232 00:27:16.189 00:27:16.189 real 0m14.034s 00:27:16.189 user 0m26.751s 00:27:16.189 sys 0m4.611s 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.189 ************************************ 00:27:16.189 END TEST nvmf_digest_error 00:27:16.189 ************************************ 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:16.189 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.190 rmmod nvme_tcp 00:27:16.190 rmmod nvme_fabrics 00:27:16.190 rmmod nvme_keyring 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2145232 ']' 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2145232 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2145232 ']' 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2145232 00:27:16.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2145232) - No such process 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2145232 is not found' 00:27:16.190 Process with pid 2145232 is not found 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- 
# iptr 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.190 16:00:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:18.727 00:27:18.727 real 0m36.580s 00:27:18.727 user 0m55.863s 00:27:18.727 sys 0m13.656s 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.727 ************************************ 00:27:18.727 END TEST nvmf_digest 00:27:18.727 ************************************ 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.727 ************************************ 00:27:18.727 START TEST nvmf_bdevperf 00:27:18.727 ************************************ 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.727 * Looking for test storage... 00:27:18.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@341 -- # ver2_l=1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.727 --rc genhtml_branch_coverage=1 00:27:18.727 --rc genhtml_function_coverage=1 00:27:18.727 --rc genhtml_legend=1 00:27:18.727 --rc geninfo_all_blocks=1 00:27:18.727 --rc geninfo_unexecuted_blocks=1 00:27:18.727 00:27:18.727 ' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.727 --rc genhtml_branch_coverage=1 00:27:18.727 --rc genhtml_function_coverage=1 00:27:18.727 --rc genhtml_legend=1 00:27:18.727 --rc geninfo_all_blocks=1 00:27:18.727 --rc geninfo_unexecuted_blocks=1 00:27:18.727 00:27:18.727 ' 00:27:18.727 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.727 --rc genhtml_branch_coverage=1 00:27:18.727 --rc genhtml_function_coverage=1 00:27:18.727 --rc genhtml_legend=1 00:27:18.727 --rc geninfo_all_blocks=1 00:27:18.727 --rc geninfo_unexecuted_blocks=1 00:27:18.728 00:27:18.728 ' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.728 --rc genhtml_branch_coverage=1 00:27:18.728 --rc genhtml_function_coverage=1 00:27:18.728 --rc genhtml_legend=1 00:27:18.728 --rc geninfo_all_blocks=1 00:27:18.728 --rc geninfo_unexecuted_blocks=1 00:27:18.728 00:27:18.728 ' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:18.728 16:00:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@15 -- # shopt -s extglob 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:18.728 16:00:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.003 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:24.004 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.004 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:24.263 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:24.263 Found net devices under 0000:af:00.0: cvl_0_0 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:24.263 Found net devices under 0000:af:00.1: cvl_0_1 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.263 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:24.264 00:27:24.264 --- 10.0.0.2 ping statistics --- 00:27:24.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.264 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:27:24.264 00:27:24.264 --- 10.0.0.1 ping statistics --- 00:27:24.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.264 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.264 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2151413 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2151413 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2151413 ']' 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.523 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.523 [2024-12-09 16:00:19.559159] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:24.523 [2024-12-09 16:00:19.559203] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.523 [2024-12-09 16:00:19.638182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:24.523 [2024-12-09 16:00:19.678598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.523 [2024-12-09 16:00:19.678635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:24.523 [2024-12-09 16:00:19.678642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.523 [2024-12-09 16:00:19.678651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.523 [2024-12-09 16:00:19.678656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.523 [2024-12-09 16:00:19.680056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.523 [2024-12-09 16:00:19.680165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.523 [2024-12-09 16:00:19.680165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 [2024-12-09 16:00:19.815825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.783 16:00:19 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 Malloc0 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.783 [2024-12-09 16:00:19.877563] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:24.783 { 00:27:24.783 "params": { 00:27:24.783 "name": "Nvme$subsystem", 00:27:24.783 "trtype": "$TEST_TRANSPORT", 00:27:24.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.783 "adrfam": "ipv4", 00:27:24.783 "trsvcid": "$NVMF_PORT", 00:27:24.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.783 "hdgst": ${hdgst:-false}, 00:27:24.783 "ddgst": ${ddgst:-false} 00:27:24.783 }, 00:27:24.783 "method": "bdev_nvme_attach_controller" 00:27:24.783 } 00:27:24.783 EOF 00:27:24.783 )") 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:24.783 16:00:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:24.783 "params": { 00:27:24.783 "name": "Nvme1", 00:27:24.783 "trtype": "tcp", 00:27:24.783 "traddr": "10.0.0.2", 00:27:24.783 "adrfam": "ipv4", 00:27:24.783 "trsvcid": "4420", 00:27:24.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.783 "hdgst": false, 00:27:24.783 "ddgst": false 00:27:24.783 }, 00:27:24.783 "method": "bdev_nvme_attach_controller" 00:27:24.783 }' 00:27:24.783 [2024-12-09 16:00:19.927886] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:24.783 [2024-12-09 16:00:19.927929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151593 ] 00:27:24.783 [2024-12-09 16:00:20.002780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.042 [2024-12-09 16:00:20.045391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.042 Running I/O for 1 seconds... 
00:27:26.415 11234.00 IOPS, 43.88 MiB/s 00:27:26.415 Latency(us) 00:27:26.415 [2024-12-09T15:00:21.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.415 Verification LBA range: start 0x0 length 0x4000 00:27:26.415 Nvme1n1 : 1.01 11328.06 44.25 0.00 0.00 11247.05 1287.31 11858.90 00:27:26.415 [2024-12-09T15:00:21.643Z] =================================================================================================================== 00:27:26.415 [2024-12-09T15:00:21.643Z] Total : 11328.06 44.25 0.00 0.00 11247.05 1287.31 11858.90 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2151821 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:26.415 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:26.415 { 00:27:26.415 "params": { 00:27:26.415 "name": "Nvme$subsystem", 00:27:26.415 "trtype": "$TEST_TRANSPORT", 00:27:26.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.415 "adrfam": "ipv4", 00:27:26.415 "trsvcid": "$NVMF_PORT", 00:27:26.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.416 "hdgst": ${hdgst:-false}, 00:27:26.416 "ddgst": 
${ddgst:-false} 00:27:26.416 }, 00:27:26.416 "method": "bdev_nvme_attach_controller" 00:27:26.416 } 00:27:26.416 EOF 00:27:26.416 )") 00:27:26.416 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:26.416 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:26.416 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:26.416 16:00:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:26.416 "params": { 00:27:26.416 "name": "Nvme1", 00:27:26.416 "trtype": "tcp", 00:27:26.416 "traddr": "10.0.0.2", 00:27:26.416 "adrfam": "ipv4", 00:27:26.416 "trsvcid": "4420", 00:27:26.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.416 "hdgst": false, 00:27:26.416 "ddgst": false 00:27:26.416 }, 00:27:26.416 "method": "bdev_nvme_attach_controller" 00:27:26.416 }' 00:27:26.416 [2024-12-09 16:00:21.420363] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:26.416 [2024-12-09 16:00:21.420413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151821 ] 00:27:26.416 [2024-12-09 16:00:21.496221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.416 [2024-12-09 16:00:21.532892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.674 Running I/O for 15 seconds... 
00:27:28.545 11513.00 IOPS, 44.97 MiB/s [2024-12-09T15:00:24.712Z] 11445.50 IOPS, 44.71 MiB/s [2024-12-09T15:00:24.712Z] 16:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2151413 00:27:29.484 16:00:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:29.484 [2024-12-09 16:00:24.387626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:29.484 [2024-12-09 16:00:24.387843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.484 [2024-12-09 16:00:24.387943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.484 [2024-12-09 16:00:24.387953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:29.484 [2024-12-09 16:00:24.387960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated command/completion pairs elided: WRITE sqid:1 nsid:1 lba:114688-115248 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:114232-114536 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:29.487 [2024-12-09 16:00:24.389742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90c7f0 is same with the state(6) to be set
00:27:29.487 [2024-12-09 16:00:24.389751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:29.487 [2024-12-09 16:00:24.389756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:29.487 [2024-12-09 16:00:24.389762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114544 len:8 PRP1 0x0 PRP2 0x0
00:27:29.487 [2024-12-09 16:00:24.389770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.487 [2024-12-09 16:00:24.392580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2]
resetting controller 00:27:29.487 [2024-12-09 16:00:24.392633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.487 [2024-12-09 16:00:24.393191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.487 [2024-12-09 16:00:24.393208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.487 [2024-12-09 16:00:24.393216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.487 [2024-12-09 16:00:24.393397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.487 [2024-12-09 16:00:24.393572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.487 [2024-12-09 16:00:24.393580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.487 [2024-12-09 16:00:24.393588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.487 [2024-12-09 16:00:24.393596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.487 [2024-12-09 16:00:24.405663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.487 [2024-12-09 16:00:24.406029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.487 [2024-12-09 16:00:24.406049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.487 [2024-12-09 16:00:24.406058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.487 [2024-12-09 16:00:24.406236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.487 [2024-12-09 16:00:24.406411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.487 [2024-12-09 16:00:24.406421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.487 [2024-12-09 16:00:24.406428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.487 [2024-12-09 16:00:24.406436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.487 [2024-12-09 16:00:24.418504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.487 [2024-12-09 16:00:24.418915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.487 [2024-12-09 16:00:24.418961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.487 [2024-12-09 16:00:24.418985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.487 [2024-12-09 16:00:24.419582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.487 [2024-12-09 16:00:24.420127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.487 [2024-12-09 16:00:24.420137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.487 [2024-12-09 16:00:24.420142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.487 [2024-12-09 16:00:24.420149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.487 [2024-12-09 16:00:24.431399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.487 [2024-12-09 16:00:24.431761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.431817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.431841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.432439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.433029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.433055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.433076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.433095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.444128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.444524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.444541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.444549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.444708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.444868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.444877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.444887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.444893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.456967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.457384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.457428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.457454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.457995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.458157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.458165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.458171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.458177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.469757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.470182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.470200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.470207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.470395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.470566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.470576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.470583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.470590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.482579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.483004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.483022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.483029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.483189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.483378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.483388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.483394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.483400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.495422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.495856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.495872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.495880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.496040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.496201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.496210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.496223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.496229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.508198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.508549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.508566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.508573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.508732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.508892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.508902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.508908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.508914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.520939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.521361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.521407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.521432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.522019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.522181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.522190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.488 [2024-12-09 16:00:24.522196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.488 [2024-12-09 16:00:24.522203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.488 [2024-12-09 16:00:24.533782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.488 [2024-12-09 16:00:24.534173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.488 [2024-12-09 16:00:24.534190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.488 [2024-12-09 16:00:24.534200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.488 [2024-12-09 16:00:24.534367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.488 [2024-12-09 16:00:24.534529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.488 [2024-12-09 16:00:24.534538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.534544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.534550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.546614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.547056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.547101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.547125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.547656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.547826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.547834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.547840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.547846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.559518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.559924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.559942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.559949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.560110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.560293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.560303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.560310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.560317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.572250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.572668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.572685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.572692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.572852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.573012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.573024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.573031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.573037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.585058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.585493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.585539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.585564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.586160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.586349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.586359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.586365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.586372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.597810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.598210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.598232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.598239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.598399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.598561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.598569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.598576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.598582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.610811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.611238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.611257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.611265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.611440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.611614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.611624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.611631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.611640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.623788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.489 [2024-12-09 16:00:24.624215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.489 [2024-12-09 16:00:24.624295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:29.489 [2024-12-09 16:00:24.624320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:29.489 [2024-12-09 16:00:24.624848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:29.489 [2024-12-09 16:00:24.625023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.489 [2024-12-09 16:00:24.625034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.489 [2024-12-09 16:00:24.625040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.489 [2024-12-09 16:00:24.625048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.489 [2024-12-09 16:00:24.636832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.489 [2024-12-09 16:00:24.637167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.489 [2024-12-09 16:00:24.637185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.489 [2024-12-09 16:00:24.637193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.489 [2024-12-09 16:00:24.637375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.489 [2024-12-09 16:00:24.637559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.489 [2024-12-09 16:00:24.637569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.489 [2024-12-09 16:00:24.637575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.489 [2024-12-09 16:00:24.637582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.489 [2024-12-09 16:00:24.649851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.489 [2024-12-09 16:00:24.650284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.489 [2024-12-09 16:00:24.650301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.489 [2024-12-09 16:00:24.650310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.489 [2024-12-09 16:00:24.650480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.489 [2024-12-09 16:00:24.650650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.489 [2024-12-09 16:00:24.650660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.489 [2024-12-09 16:00:24.650667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.489 [2024-12-09 16:00:24.650675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.489 [2024-12-09 16:00:24.662907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.489 [2024-12-09 16:00:24.663337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.489 [2024-12-09 16:00:24.663355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.489 [2024-12-09 16:00:24.663363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.489 [2024-12-09 16:00:24.663537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.489 [2024-12-09 16:00:24.663712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.489 [2024-12-09 16:00:24.663722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.489 [2024-12-09 16:00:24.663729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.489 [2024-12-09 16:00:24.663735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.490 [2024-12-09 16:00:24.675874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.490 [2024-12-09 16:00:24.676295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.490 [2024-12-09 16:00:24.676313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.490 [2024-12-09 16:00:24.676321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.490 [2024-12-09 16:00:24.676495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.490 [2024-12-09 16:00:24.676669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.490 [2024-12-09 16:00:24.676679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.490 [2024-12-09 16:00:24.676686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.490 [2024-12-09 16:00:24.676692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.490 [2024-12-09 16:00:24.688763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.490 [2024-12-09 16:00:24.689123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.490 [2024-12-09 16:00:24.689141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.490 [2024-12-09 16:00:24.689149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.490 [2024-12-09 16:00:24.689343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.490 [2024-12-09 16:00:24.689519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.490 [2024-12-09 16:00:24.689529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.490 [2024-12-09 16:00:24.689535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.490 [2024-12-09 16:00:24.689542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.490 [2024-12-09 16:00:24.701541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.490 [2024-12-09 16:00:24.701979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.490 [2024-12-09 16:00:24.702027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.490 [2024-12-09 16:00:24.702051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.490 [2024-12-09 16:00:24.702622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.490 [2024-12-09 16:00:24.702794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.490 [2024-12-09 16:00:24.702803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.490 [2024-12-09 16:00:24.702810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.490 [2024-12-09 16:00:24.702817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 10221.00 IOPS, 39.93 MiB/s [2024-12-09T15:00:24.978Z] [2024-12-09 16:00:24.714336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.714738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.714756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.714765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.714925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.715086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.715095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.715102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.715108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.727125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.727551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.727569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.727576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.727736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.727897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.727906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.727912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.727918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.739996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.740338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.740387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.740411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.740893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.741053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.741064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.741071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.741077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.752861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.753212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.753234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.753241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.753402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.753563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.753572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.753579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.753585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.765630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.766045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.766090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.766115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.766680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.766853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.766862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.766869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.766875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.778443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.750 [2024-12-09 16:00:24.778865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.750 [2024-12-09 16:00:24.778910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.750 [2024-12-09 16:00:24.778934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.750 [2024-12-09 16:00:24.779533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.750 [2024-12-09 16:00:24.779989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.750 [2024-12-09 16:00:24.779999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.750 [2024-12-09 16:00:24.780006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.750 [2024-12-09 16:00:24.780018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.750 [2024-12-09 16:00:24.791239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.791660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.791706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.791730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.792180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.792394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.792405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.792411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.792417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.804151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.804525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.804542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.804551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.804712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.804872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.804883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.804889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.804895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.817111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.817481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.817499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.817506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.817666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.817842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.817853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.817859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.817866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.830007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.830437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.830482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.830507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.831009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.831172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.831181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.831189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.831196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.842759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.843112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.843129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.843137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.843320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.843499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.843508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.843515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.843521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.855667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.856084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.856157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.856758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.857084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.857093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.857101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.857108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.868672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.869086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.869126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.869152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.869756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.870355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.870387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.870393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.870399] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.881676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.882067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.882084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.882091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.882259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.882420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.882430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.882436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.882442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.894573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.894950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.894966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.894973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.895133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.895299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.895308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.895315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.895321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.907502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.907936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.907954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.751 [2024-12-09 16:00:24.907961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.751 [2024-12-09 16:00:24.908130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.751 [2024-12-09 16:00:24.908322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.751 [2024-12-09 16:00:24.908335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.751 [2024-12-09 16:00:24.908342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.751 [2024-12-09 16:00:24.908348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.751 [2024-12-09 16:00:24.920572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.751 [2024-12-09 16:00:24.921000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.751 [2024-12-09 16:00:24.921018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.752 [2024-12-09 16:00:24.921026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.752 [2024-12-09 16:00:24.921197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.752 [2024-12-09 16:00:24.921393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.752 [2024-12-09 16:00:24.921403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.752 [2024-12-09 16:00:24.921410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.752 [2024-12-09 16:00:24.921417] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.752 [2024-12-09 16:00:24.933512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.752 [2024-12-09 16:00:24.933918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.752 [2024-12-09 16:00:24.933935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.752 [2024-12-09 16:00:24.933943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.752 [2024-12-09 16:00:24.934112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.752 [2024-12-09 16:00:24.934289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.752 [2024-12-09 16:00:24.934299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.752 [2024-12-09 16:00:24.934306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.752 [2024-12-09 16:00:24.934313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.752 [2024-12-09 16:00:24.946301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.752 [2024-12-09 16:00:24.946709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.752 [2024-12-09 16:00:24.946726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.752 [2024-12-09 16:00:24.946734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.752 [2024-12-09 16:00:24.946894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.752 [2024-12-09 16:00:24.947055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.752 [2024-12-09 16:00:24.947064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.752 [2024-12-09 16:00:24.947071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.752 [2024-12-09 16:00:24.947080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.752 [2024-12-09 16:00:24.959113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.752 [2024-12-09 16:00:24.959534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.752 [2024-12-09 16:00:24.959579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.752 [2024-12-09 16:00:24.959603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.752 [2024-12-09 16:00:24.960143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.752 [2024-12-09 16:00:24.960330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.752 [2024-12-09 16:00:24.960338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.752 [2024-12-09 16:00:24.960345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.752 [2024-12-09 16:00:24.960351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.752 [2024-12-09 16:00:24.971953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.752 [2024-12-09 16:00:24.972376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.752 [2024-12-09 16:00:24.972394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:29.752 [2024-12-09 16:00:24.972403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:29.752 [2024-12-09 16:00:24.972587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:29.752 [2024-12-09 16:00:24.972766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.752 [2024-12-09 16:00:24.972777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.752 [2024-12-09 16:00:24.972783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.752 [2024-12-09 16:00:24.972790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.012 [2024-12-09 16:00:24.984922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.012 [2024-12-09 16:00:24.985350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.012 [2024-12-09 16:00:24.985368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.012 [2024-12-09 16:00:24.985376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.012 [2024-12-09 16:00:24.985555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.012 [2024-12-09 16:00:24.985717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.012 [2024-12-09 16:00:24.985727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.012 [2024-12-09 16:00:24.985733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.012 [2024-12-09 16:00:24.985740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.012 [2024-12-09 16:00:24.997718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.012 [2024-12-09 16:00:24.998134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-12-09 16:00:24.998186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.012 [2024-12-09 16:00:24.998212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.012 [2024-12-09 16:00:24.998759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.012 [2024-12-09 16:00:24.998931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.012 [2024-12-09 16:00:24.998941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.012 [2024-12-09 16:00:24.998947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.012 [2024-12-09 16:00:24.998954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.012 [2024-12-09 16:00:25.010562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.012 [2024-12-09 16:00:25.010999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-12-09 16:00:25.011044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.012 [2024-12-09 16:00:25.011069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.012 [2024-12-09 16:00:25.011579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.012 [2024-12-09 16:00:25.011749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.012 [2024-12-09 16:00:25.011757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.012 [2024-12-09 16:00:25.011765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.012 [2024-12-09 16:00:25.011771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.012 [2024-12-09 16:00:25.023392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.012 [2024-12-09 16:00:25.023781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.012 [2024-12-09 16:00:25.023798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.012 [2024-12-09 16:00:25.023805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.012 [2024-12-09 16:00:25.023967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.012 [2024-12-09 16:00:25.024126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.012 [2024-12-09 16:00:25.024136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.012 [2024-12-09 16:00:25.024142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.012 [2024-12-09 16:00:25.024148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.012 [2024-12-09 16:00:25.036121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.012 [2024-12-09 16:00:25.036537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.012 [2024-12-09 16:00:25.036578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.012 [2024-12-09 16:00:25.036603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.012 [2024-12-09 16:00:25.037195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.012 [2024-12-09 16:00:25.037478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.012 [2024-12-09 16:00:25.037489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.012 [2024-12-09 16:00:25.037495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.012 [2024-12-09 16:00:25.037502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.012 [2024-12-09 16:00:25.048899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.012 [2024-12-09 16:00:25.049310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.012 [2024-12-09 16:00:25.049348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.012 [2024-12-09 16:00:25.049374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.012 [2024-12-09 16:00:25.049957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.012 [2024-12-09 16:00:25.050560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.050588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.050608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.050636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.061759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.062157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.062202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.062240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.062635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.062806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.062815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.062822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.062828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.074498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.074910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.074955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.074979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.075521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.075684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.075694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.075701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.075707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.087318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.087716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.087734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.087742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.087910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.088079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.088089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.088096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.088102] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.100150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.100584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.100631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.100655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.101252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.101845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.101855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.101862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.101868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.113099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.113500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.113547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.113571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.114155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.114758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.114788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.114812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.114819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.126014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.126434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.126451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.126460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.126621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.126783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.126793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.126800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.126808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.139146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.139536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.139555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.139563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.139738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.139914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.139924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.139932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.139939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.152075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.152402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.152420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.152428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.152590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.152750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.152760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.152766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.152772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.164973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.165330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.165368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.165377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.165551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.165725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.165734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.013 [2024-12-09 16:00:25.165741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.013 [2024-12-09 16:00:25.165747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.013 [2024-12-09 16:00:25.178033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.013 [2024-12-09 16:00:25.178375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.013 [2024-12-09 16:00:25.178393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.013 [2024-12-09 16:00:25.178401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.013 [2024-12-09 16:00:25.178569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.013 [2024-12-09 16:00:25.178738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.013 [2024-12-09 16:00:25.178748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.014 [2024-12-09 16:00:25.178755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.014 [2024-12-09 16:00:25.178762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.014 [2024-12-09 16:00:25.190815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.014 [2024-12-09 16:00:25.191136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.014 [2024-12-09 16:00:25.191153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.014 [2024-12-09 16:00:25.191159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.014 [2024-12-09 16:00:25.191346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.014 [2024-12-09 16:00:25.191516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.014 [2024-12-09 16:00:25.191526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.014 [2024-12-09 16:00:25.191533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.014 [2024-12-09 16:00:25.191540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.014 [2024-12-09 16:00:25.203630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.014 [2024-12-09 16:00:25.203959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.014 [2024-12-09 16:00:25.203976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.014 [2024-12-09 16:00:25.203984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.014 [2024-12-09 16:00:25.204157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.014 [2024-12-09 16:00:25.204335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.014 [2024-12-09 16:00:25.204345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.014 [2024-12-09 16:00:25.204351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.014 [2024-12-09 16:00:25.204360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.014 [2024-12-09 16:00:25.216551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.014 [2024-12-09 16:00:25.216982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.014 [2024-12-09 16:00:25.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.014 [2024-12-09 16:00:25.217044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.014 [2024-12-09 16:00:25.217630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.014 [2024-12-09 16:00:25.218024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.014 [2024-12-09 16:00:25.218043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.014 [2024-12-09 16:00:25.218057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.014 [2024-12-09 16:00:25.218071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.014 [2024-12-09 16:00:25.231506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.014 [2024-12-09 16:00:25.232018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.014 [2024-12-09 16:00:25.232041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.014 [2024-12-09 16:00:25.232052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.014 [2024-12-09 16:00:25.232315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.014 [2024-12-09 16:00:25.232575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.014 [2024-12-09 16:00:25.232588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.014 [2024-12-09 16:00:25.232599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.014 [2024-12-09 16:00:25.232609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.244513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.272 [2024-12-09 16:00:25.244950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.272 [2024-12-09 16:00:25.244968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.272 [2024-12-09 16:00:25.244977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.272 [2024-12-09 16:00:25.245150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.272 [2024-12-09 16:00:25.245333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.272 [2024-12-09 16:00:25.245344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.272 [2024-12-09 16:00:25.245355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.272 [2024-12-09 16:00:25.245363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.257423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.272 [2024-12-09 16:00:25.257703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.272 [2024-12-09 16:00:25.257720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.272 [2024-12-09 16:00:25.257728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.272 [2024-12-09 16:00:25.257897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.272 [2024-12-09 16:00:25.258067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.272 [2024-12-09 16:00:25.258077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.272 [2024-12-09 16:00:25.258084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.272 [2024-12-09 16:00:25.258090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.270288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.272 [2024-12-09 16:00:25.270654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.272 [2024-12-09 16:00:25.270671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.272 [2024-12-09 16:00:25.270679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.272 [2024-12-09 16:00:25.270839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.272 [2024-12-09 16:00:25.271000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.272 [2024-12-09 16:00:25.271009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.272 [2024-12-09 16:00:25.271015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.272 [2024-12-09 16:00:25.271021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.283092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.272 [2024-12-09 16:00:25.283500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.272 [2024-12-09 16:00:25.283519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.272 [2024-12-09 16:00:25.283526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.272 [2024-12-09 16:00:25.283694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.272 [2024-12-09 16:00:25.283863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.272 [2024-12-09 16:00:25.283873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.272 [2024-12-09 16:00:25.283880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.272 [2024-12-09 16:00:25.283886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.295961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.272 [2024-12-09 16:00:25.296378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.272 [2024-12-09 16:00:25.296425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:30.272 [2024-12-09 16:00:25.296448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:30.272 [2024-12-09 16:00:25.297007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:30.272 [2024-12-09 16:00:25.297178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.272 [2024-12-09 16:00:25.297187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.272 [2024-12-09 16:00:25.297193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.272 [2024-12-09 16:00:25.297200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.272 [2024-12-09 16:00:25.308803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.272 [2024-12-09 16:00:25.309192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-12-09 16:00:25.309209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.272 [2024-12-09 16:00:25.309224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.272 [2024-12-09 16:00:25.309408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.272 [2024-12-09 16:00:25.309578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.272 [2024-12-09 16:00:25.309587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.272 [2024-12-09 16:00:25.309594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.272 [2024-12-09 16:00:25.309601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.272 [2024-12-09 16:00:25.321835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.272 [2024-12-09 16:00:25.322247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-12-09 16:00:25.322292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.272 [2024-12-09 16:00:25.322316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.272 [2024-12-09 16:00:25.322900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.272 [2024-12-09 16:00:25.323367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.272 [2024-12-09 16:00:25.323377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.272 [2024-12-09 16:00:25.323384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.272 [2024-12-09 16:00:25.323390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.272 [2024-12-09 16:00:25.334750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.272 [2024-12-09 16:00:25.335077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-12-09 16:00:25.335099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.272 [2024-12-09 16:00:25.335108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.272 [2024-12-09 16:00:25.335283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.272 [2024-12-09 16:00:25.335452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.272 [2024-12-09 16:00:25.335473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.272 [2024-12-09 16:00:25.335480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.272 [2024-12-09 16:00:25.335487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.272 [2024-12-09 16:00:25.347613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.272 [2024-12-09 16:00:25.347946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-12-09 16:00:25.347963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.272 [2024-12-09 16:00:25.347970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.272 [2024-12-09 16:00:25.348139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.272 [2024-12-09 16:00:25.348315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.272 [2024-12-09 16:00:25.348326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.272 [2024-12-09 16:00:25.348333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.272 [2024-12-09 16:00:25.348339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.272 [2024-12-09 16:00:25.360558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.272 [2024-12-09 16:00:25.360948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.272 [2024-12-09 16:00:25.360992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.272 [2024-12-09 16:00:25.361016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.272 [2024-12-09 16:00:25.361479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.272 [2024-12-09 16:00:25.361642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.272 [2024-12-09 16:00:25.361651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.272 [2024-12-09 16:00:25.361657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.361663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.373434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.373708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.373724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.373732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.373891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.374055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.374065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.374071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.374077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.386270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.386537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.386554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.386561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.386721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.386882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.386891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.386898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.386904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.399179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.399564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.399607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.399631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.400214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.400433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.400443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.400450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.400457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.412269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.412556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.412575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.412582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.412752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.412921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.412931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.412943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.412950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.425175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.425467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.425485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.425493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.425662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.425832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.425842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.425848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.425854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.438263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.438591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.438608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.438616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.438785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.438955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.438964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.438970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.438977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.451306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.451691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.451709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.451717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.451891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.452067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.452077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.452083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.452090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.464337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.464666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.464684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.464692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.464865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.465039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.465049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.465056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.465063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.477413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.477841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.477858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.477866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.478039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.478215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.478231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.478238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.478262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.273 [2024-12-09 16:00:25.490439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.273 [2024-12-09 16:00:25.490864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.273 [2024-12-09 16:00:25.490882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.273 [2024-12-09 16:00:25.490889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.273 [2024-12-09 16:00:25.491063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.273 [2024-12-09 16:00:25.491247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.273 [2024-12-09 16:00:25.491257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.273 [2024-12-09 16:00:25.491264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.273 [2024-12-09 16:00:25.491271] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.503708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.504133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.504152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.504163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.504357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.504543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.504554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.504561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.504568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.516684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.517115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.517133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.517141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.517323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.517500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.517510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.517517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.517523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.529723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.530157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.530175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.530182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.530379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.530560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.530570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.530577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.530583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.542548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.542965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.543011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.543034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.543582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.543748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.543758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.543764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.543770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.555326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.555716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.555732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.555740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.555899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.556060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.556070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.556077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.556083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.568191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.568611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.568669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.568694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.569291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.569874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.569884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.569891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.569897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.581030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.581443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.581512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.582032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.582193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.582202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.582211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.582226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.593861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.594188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.594206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.594214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.594390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.594559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.594569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.594575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.594582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.606822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.607263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.607310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.607334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.607918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.608324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.533 [2024-12-09 16:00:25.608334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.533 [2024-12-09 16:00:25.608340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.533 [2024-12-09 16:00:25.608347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.533 [2024-12-09 16:00:25.619600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.533 [2024-12-09 16:00:25.620009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.533 [2024-12-09 16:00:25.620053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.533 [2024-12-09 16:00:25.620076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.533 [2024-12-09 16:00:25.620673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.533 [2024-12-09 16:00:25.621121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.621131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.621137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.621144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.632364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.632689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.632706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.632714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.632883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.633052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.633062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.633068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.633075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.645278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.645637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.645682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.645707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.646223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.646395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.646404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.646411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.646418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.658226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.658543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.658589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.658613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.659194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.659426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.659436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.659442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.659449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.671139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.671518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.671535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.671546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.671707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.671891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.671900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.671907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.671913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.684086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.684448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.684466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.684473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.684641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.684977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.684988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.684996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.685002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.697060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.697382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.697400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.697409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.697579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.697748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.697758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.697765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.697771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 7665.75 IOPS, 29.94 MiB/s [2024-12-09T15:00:25.762Z] [2024-12-09 16:00:25.711027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.711455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.711474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.711482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.711651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.711826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.711836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.711843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.711849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.723764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.724185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.724253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.724280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.724864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.725316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.725327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.725333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.725340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.736574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.737003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.737047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.737071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.737556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.737728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.737738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.534 [2024-12-09 16:00:25.737745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.534 [2024-12-09 16:00:25.737752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.534 [2024-12-09 16:00:25.749428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.534 [2024-12-09 16:00:25.749849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.534 [2024-12-09 16:00:25.749894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.534 [2024-12-09 16:00:25.749918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.534 [2024-12-09 16:00:25.750385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.534 [2024-12-09 16:00:25.750556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.534 [2024-12-09 16:00:25.750567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.535 [2024-12-09 16:00:25.750577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.535 [2024-12-09 16:00:25.750584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.762417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.762833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.762850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.762858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.763018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.763179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.763188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.763194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.763201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.775245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.775659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.775697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.775724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.776321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.776492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.776502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.776508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.776515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.788113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.788420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.788438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.788446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.788606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.788766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.788776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.788782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.788788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.800943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.801342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.801360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.801368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.801528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.801689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.801698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.801704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.801710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.813790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.814230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.814275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.814300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.814729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.814891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.814900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.814906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.814913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.826656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.827006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.827052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.827077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.827566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.827737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.827746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.827754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.795 [2024-12-09 16:00:25.827761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.795 [2024-12-09 16:00:25.839413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.795 [2024-12-09 16:00:25.839835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.795 [2024-12-09 16:00:25.839852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.795 [2024-12-09 16:00:25.839862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.795 [2024-12-09 16:00:25.840021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.795 [2024-12-09 16:00:25.840182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.795 [2024-12-09 16:00:25.840191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.795 [2024-12-09 16:00:25.840198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.840204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.852286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.852703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.852752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.852776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.853375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.853882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.853892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.853899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.853905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.865061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.865416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.865462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.865485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.866071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.866671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.866695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.866701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.866709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.877935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.878349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.878367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.878376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.878537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.878700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.878710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.878717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.878723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.890822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.891241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.891286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.891311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.891757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.891927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.891937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.891944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.891950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.903648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.903987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.904004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.904012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.904172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.904367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.904378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.904385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.904392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.916526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.916940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.916957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.916964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.917125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.917310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.917321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.917331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.917338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.929331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.929697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.929714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.929721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.929891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.930060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.930069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.930075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.930082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.942201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.942563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.942609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.942633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.943226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.943816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.943841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.943863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.943883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.955227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.955559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.955576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.955584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.796 [2024-12-09 16:00:25.955754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.796 [2024-12-09 16:00:25.955924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.796 [2024-12-09 16:00:25.955933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.796 [2024-12-09 16:00:25.955940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.796 [2024-12-09 16:00:25.955946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.796 [2024-12-09 16:00:25.968176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.796 [2024-12-09 16:00:25.968624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.796 [2024-12-09 16:00:25.968669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.796 [2024-12-09 16:00:25.968693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.797 [2024-12-09 16:00:25.969290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.797 [2024-12-09 16:00:25.969878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.797 [2024-12-09 16:00:25.969889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.797 [2024-12-09 16:00:25.969895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.797 [2024-12-09 16:00:25.969902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.797 [2024-12-09 16:00:25.981053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.797 [2024-12-09 16:00:25.981466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.797 [2024-12-09 16:00:25.981483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.797 [2024-12-09 16:00:25.981491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.797 [2024-12-09 16:00:25.981650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.797 [2024-12-09 16:00:25.981810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.797 [2024-12-09 16:00:25.981820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.797 [2024-12-09 16:00:25.981827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.797 [2024-12-09 16:00:25.981833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.797 [2024-12-09 16:00:25.993882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.797 [2024-12-09 16:00:25.994234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.797 [2024-12-09 16:00:25.994251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.797 [2024-12-09 16:00:25.994259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.797 [2024-12-09 16:00:25.994428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.797 [2024-12-09 16:00:25.994597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.797 [2024-12-09 16:00:25.994606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.797 [2024-12-09 16:00:25.994613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.797 [2024-12-09 16:00:25.994619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.797 [2024-12-09 16:00:26.006766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.797 [2024-12-09 16:00:26.007188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.797 [2024-12-09 16:00:26.007239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.797 [2024-12-09 16:00:26.007273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.797 [2024-12-09 16:00:26.007787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.797 [2024-12-09 16:00:26.007948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.797 [2024-12-09 16:00:26.007957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.797 [2024-12-09 16:00:26.007963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.797 [2024-12-09 16:00:26.007971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.797 [2024-12-09 16:00:26.019808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.797 [2024-12-09 16:00:26.020243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.797 [2024-12-09 16:00:26.020261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:30.797 [2024-12-09 16:00:26.020269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:30.797 [2024-12-09 16:00:26.020452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:30.797 [2024-12-09 16:00:26.020622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.797 [2024-12-09 16:00:26.020632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.797 [2024-12-09 16:00:26.020638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.797 [2024-12-09 16:00:26.020644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.057 [2024-12-09 16:00:26.032548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.057 [2024-12-09 16:00:26.032891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.057 [2024-12-09 16:00:26.032909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.057 [2024-12-09 16:00:26.032916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.057 [2024-12-09 16:00:26.033076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.057 [2024-12-09 16:00:26.033257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.057 [2024-12-09 16:00:26.033267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.057 [2024-12-09 16:00:26.033274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.057 [2024-12-09 16:00:26.033281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.057 [2024-12-09 16:00:26.045427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.057 [2024-12-09 16:00:26.045846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.057 [2024-12-09 16:00:26.045863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.057 [2024-12-09 16:00:26.045871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.057 [2024-12-09 16:00:26.046031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.057 [2024-12-09 16:00:26.046192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.057 [2024-12-09 16:00:26.046207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.057 [2024-12-09 16:00:26.046213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.057 [2024-12-09 16:00:26.046225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.057 [2024-12-09 16:00:26.058242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.057 [2024-12-09 16:00:26.058596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.057 [2024-12-09 16:00:26.058614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.057 [2024-12-09 16:00:26.058621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.057 [2024-12-09 16:00:26.058781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.057 [2024-12-09 16:00:26.058942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.057 [2024-12-09 16:00:26.058951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.057 [2024-12-09 16:00:26.058958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.057 [2024-12-09 16:00:26.058964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.057 [2024-12-09 16:00:26.071064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.057 [2024-12-09 16:00:26.071420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.057 [2024-12-09 16:00:26.071438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.057 [2024-12-09 16:00:26.071446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.057 [2024-12-09 16:00:26.071606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.057 [2024-12-09 16:00:26.071767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.057 [2024-12-09 16:00:26.071776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.057 [2024-12-09 16:00:26.071783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.057 [2024-12-09 16:00:26.071788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.057 [2024-12-09 16:00:26.083904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.057 [2024-12-09 16:00:26.084310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.057 [2024-12-09 16:00:26.084343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.057 [2024-12-09 16:00:26.084367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.057 [2024-12-09 16:00:26.084949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.057 [2024-12-09 16:00:26.085111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.057 [2024-12-09 16:00:26.085120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.057 [2024-12-09 16:00:26.085126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.085136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.096764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.097109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.097126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.097133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.097316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.097487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.097497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.097504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.097510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.109569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.109995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.110041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.110064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.110665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.111242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.111252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.111258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.111265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.122415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.122818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.122863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.122887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.123347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.123509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.123519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.123526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.123532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.135213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.135625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.135641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.135649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.135809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.135970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.135979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.135986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.135992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.148011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.148423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.148441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.148450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.148611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.148772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.148781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.148787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.148793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.160864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.161273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.161319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.161343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.161928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.162177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.162186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.162193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.162199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.173627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.173934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.173951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.173958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.174122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.174306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.174317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.174323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.174330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.186473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.186884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.186901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.186909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.187069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.187234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.187260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.187268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.187276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.199322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.199652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.199669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.199677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.199846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.200015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.200025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.200031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.200038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.212332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.058 [2024-12-09 16:00:26.212690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.058 [2024-12-09 16:00:26.212709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.058 [2024-12-09 16:00:26.212716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.058 [2024-12-09 16:00:26.212886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.058 [2024-12-09 16:00:26.213056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.058 [2024-12-09 16:00:26.213085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.058 [2024-12-09 16:00:26.213092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.058 [2024-12-09 16:00:26.213099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.058 [2024-12-09 16:00:26.225279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.059 [2024-12-09 16:00:26.225699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.059 [2024-12-09 16:00:26.225738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.059 [2024-12-09 16:00:26.225764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.059 [2024-12-09 16:00:26.226326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.059 [2024-12-09 16:00:26.226496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.059 [2024-12-09 16:00:26.226506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.059 [2024-12-09 16:00:26.226513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.059 [2024-12-09 16:00:26.226519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.059 [2024-12-09 16:00:26.238020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.059 [2024-12-09 16:00:26.238417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.059 [2024-12-09 16:00:26.238434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.059 [2024-12-09 16:00:26.238442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.059 [2024-12-09 16:00:26.238602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.059 [2024-12-09 16:00:26.238762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.059 [2024-12-09 16:00:26.238771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.059 [2024-12-09 16:00:26.238777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.059 [2024-12-09 16:00:26.238783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.059 [2024-12-09 16:00:26.250798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.059 [2024-12-09 16:00:26.251102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.059 [2024-12-09 16:00:26.251119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.059 [2024-12-09 16:00:26.251127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.059 [2024-12-09 16:00:26.251311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.059 [2024-12-09 16:00:26.251481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.059 [2024-12-09 16:00:26.251491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.059 [2024-12-09 16:00:26.251497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.059 [2024-12-09 16:00:26.251507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.059 [2024-12-09 16:00:26.263580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.059 [2024-12-09 16:00:26.263988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.059 [2024-12-09 16:00:26.264033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.059 [2024-12-09 16:00:26.264056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.059 [2024-12-09 16:00:26.264579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.059 [2024-12-09 16:00:26.264750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.059 [2024-12-09 16:00:26.264759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.059 [2024-12-09 16:00:26.264765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.059 [2024-12-09 16:00:26.264772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.059 [2024-12-09 16:00:26.276389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.059 [2024-12-09 16:00:26.276727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.059 [2024-12-09 16:00:26.276744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.059 [2024-12-09 16:00:26.276751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.059 [2024-12-09 16:00:26.276910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.059 [2024-12-09 16:00:26.277070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.059 [2024-12-09 16:00:26.277079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.059 [2024-12-09 16:00:26.277086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.059 [2024-12-09 16:00:26.277092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.289407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.289840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.289859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.289867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.290042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.290223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.290234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.290240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.290248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.302273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.302587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.302634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.302659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.303256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.303549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.303559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.303566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.303572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.315113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.315548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.315594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.315618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.316202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.316693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.316703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.316709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.316715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.327960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.328374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.328392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.328399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.328560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.328721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.328730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.328737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.328742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.340813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.341202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.341224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.341232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.341395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.341555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.341564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.341570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.341576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.353643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.353986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.354003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.354010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.354169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.354355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.354364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.354371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.354377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.366507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.366823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.366881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.366906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.367506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.367712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.367722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.367729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.367736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.320 [2024-12-09 16:00:26.379367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.320 [2024-12-09 16:00:26.379772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.320 [2024-12-09 16:00:26.379808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.320 [2024-12-09 16:00:26.379833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.320 [2024-12-09 16:00:26.380411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.320 [2024-12-09 16:00:26.380806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.320 [2024-12-09 16:00:26.380829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.320 [2024-12-09 16:00:26.380845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.320 [2024-12-09 16:00:26.380858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.394408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.394910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.394933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.394944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.395199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.395463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.395476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.395486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.395495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.407456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.407865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.407881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.407888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.408062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.408242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.408251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.408259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.408266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.420232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.420575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.420592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.420599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.420758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.420919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.420928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.420935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.420944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.433055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.433397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.433415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.433423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.433591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.433761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.433770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.433777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.433783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.445882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.446291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.446308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.446315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.446476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.446636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.446646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.446652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.446658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.458916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.459338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.459357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.459364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.459539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.459713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.459723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.459730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.459736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.471932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.472334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.472354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.472361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.472521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.472682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.472691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.472697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.472704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.484672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.485075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.485091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.485099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.485284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.485454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.485464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.485470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.485476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.497489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.497907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.497924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.497932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.498093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.498275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.498285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.498292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.498299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.510519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.321 [2024-12-09 16:00:26.510951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.321 [2024-12-09 16:00:26.510970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.321 [2024-12-09 16:00:26.510977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.321 [2024-12-09 16:00:26.511155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.321 [2024-12-09 16:00:26.511335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.321 [2024-12-09 16:00:26.511345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.321 [2024-12-09 16:00:26.511352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.321 [2024-12-09 16:00:26.511358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.321 [2024-12-09 16:00:26.523595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.321 [2024-12-09 16:00:26.524024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.322 [2024-12-09 16:00:26.524041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.322 [2024-12-09 16:00:26.524049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.322 [2024-12-09 16:00:26.524229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.322 [2024-12-09 16:00:26.524404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.322 [2024-12-09 16:00:26.524414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.322 [2024-12-09 16:00:26.524420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.322 [2024-12-09 16:00:26.524427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.322 [2024-12-09 16:00:26.536677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.322 [2024-12-09 16:00:26.537106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.322 [2024-12-09 16:00:26.537124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.322 [2024-12-09 16:00:26.537132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.322 [2024-12-09 16:00:26.537311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.322 [2024-12-09 16:00:26.537486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.322 [2024-12-09 16:00:26.537496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.322 [2024-12-09 16:00:26.537503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.322 [2024-12-09 16:00:26.537510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.581 [2024-12-09 16:00:26.549690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.581 [2024-12-09 16:00:26.550125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.550143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.550151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.550334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.550510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.550522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.550530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.550538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.562932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.563373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.563392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.563401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.563590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.563766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.563775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.563782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.563789] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.576040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.576461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.576480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.576488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.576672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.576858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.576869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.576876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.576883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.589364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.589803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.589822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.589830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.590014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.590202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.590213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.590228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.590235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.602607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.603052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.603072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.603080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.603271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.603458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.603469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.603476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.603483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.615727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.616091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.616109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.616117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.616298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.616474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.616484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.616491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.616497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.628756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.629206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.629230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.629239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.629414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.629598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.629608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.629614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.629621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.641638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.641980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.642000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.642008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.642168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.642335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.642346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.642352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.642358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.654491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.654855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.654872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.654879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.655039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.655198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.655208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.655214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.655227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.667528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.667845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.582 [2024-12-09 16:00:26.667862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.582 [2024-12-09 16:00:26.667869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.582 [2024-12-09 16:00:26.668029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.582 [2024-12-09 16:00:26.668189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.582 [2024-12-09 16:00:26.668198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.582 [2024-12-09 16:00:26.668204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.582 [2024-12-09 16:00:26.668210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.582 [2024-12-09 16:00:26.680498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.582 [2024-12-09 16:00:26.680813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.680830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.680837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.681003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.681165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.681175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.681181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.681187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.693606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.693978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.693997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.694005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.694180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.694364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.694375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.694381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.694388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.706649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.707066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.707084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.707092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.707290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.707465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.707475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.707482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.707488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 6132.60 IOPS, 23.96 MiB/s [2024-12-09T15:00:26.811Z] [2024-12-09 16:00:26.719605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.720018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.720036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.720044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.720225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.720403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.720413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.720424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.720431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.732687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.733110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.733128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.733136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.733313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.733483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.733493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.733500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.733507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.745622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.746045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.746063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.746070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.746245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.746415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.746425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.746431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.746437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.758603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.759004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.759021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.759029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.759197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.759374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.759384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.759390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.759396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.771417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.771754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.771771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.771779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.771938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.772099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.772108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.772114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.772120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.784327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.784595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.784613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.784620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.784781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.784942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.784951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.784957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.784963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.583 [2024-12-09 16:00:26.797266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.583 [2024-12-09 16:00:26.797553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.583 [2024-12-09 16:00:26.797570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.583 [2024-12-09 16:00:26.797577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.583 [2024-12-09 16:00:26.797737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.583 [2024-12-09 16:00:26.797897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.583 [2024-12-09 16:00:26.797907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.583 [2024-12-09 16:00:26.797913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.583 [2024-12-09 16:00:26.797919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.843 [2024-12-09 16:00:26.810326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.844 [2024-12-09 16:00:26.810721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.844 [2024-12-09 16:00:26.810743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.844 [2024-12-09 16:00:26.810751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.844 [2024-12-09 16:00:26.810912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.844 [2024-12-09 16:00:26.811074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.844 [2024-12-09 16:00:26.811083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.844 [2024-12-09 16:00:26.811090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.844 [2024-12-09 16:00:26.811097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.844 [2024-12-09 16:00:26.823248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.844 [2024-12-09 16:00:26.823598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.844 [2024-12-09 16:00:26.823616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420
00:27:31.844 [2024-12-09 16:00:26.823624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set
00:27:31.844 [2024-12-09 16:00:26.823785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor
00:27:31.844 [2024-12-09 16:00:26.823945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.844 [2024-12-09 16:00:26.823955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.844 [2024-12-09 16:00:26.823961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.844 [2024-12-09 16:00:26.823968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.844 [2024-12-09 16:00:26.836127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.836554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.836600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.836625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.837052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.837215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.837230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.837237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.837260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.849118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.849463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.849510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.849534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.850011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.850176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.850186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.850193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.850199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.862122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.862472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.862490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.862497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.862656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.862816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.862825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.862832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.862838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.875057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.875425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.875443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.875450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.875621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.875783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.875792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.875798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.875804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.887971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.888324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.888342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.888349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.888510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.888671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.888681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.888691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.888698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.900920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.901256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.901274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.901282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.901442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.901603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.901612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.901619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.901625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.913952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.914377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.914405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.914412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.914573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.914734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.914744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.914751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.914757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.926826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.844 [2024-12-09 16:00:26.927223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.844 [2024-12-09 16:00:26.927240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.844 [2024-12-09 16:00:26.927248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.844 [2024-12-09 16:00:26.927407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.844 [2024-12-09 16:00:26.927568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.844 [2024-12-09 16:00:26.927577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.844 [2024-12-09 16:00:26.927583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.844 [2024-12-09 16:00:26.927589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.844 [2024-12-09 16:00:26.939710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:26.940129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:26.940175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:26.940199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:26.940795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:26.941322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:26.941331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:26.941338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:26.941346] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:26.952528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:26.952937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:26.952980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:26.953006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:26.953490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:26.953653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:26.953662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:26.953668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:26.953674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:26.965370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:26.965782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:26.965827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:26.965851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:26.966320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:26.966482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:26.966491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:26.966498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:26.966504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:26.978173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:26.978523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:26.978540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:26.978551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:26.978721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:26.978890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:26.978900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:26.978906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:26.978913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:26.991166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:26.991571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:26.991588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:26.991596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:26.991766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:26.991935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:26.991945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:26.991951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:26.991957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.004117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.004506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.004551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:27.004576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:27.005159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:27.005624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:27.005635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:27.005641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:27.005648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.017073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.017446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.017463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:27.017471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:27.017631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:27.017795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:27.017804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:27.017810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:27.017816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.029928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.030333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.030351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:27.030359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:27.030519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:27.030679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:27.030688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:27.030694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:27.030701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.042820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.043232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.043249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:27.043257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:27.043415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:27.043576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:27.043585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:27.043591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:27.043597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.055558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.055964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.056009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.845 [2024-12-09 16:00:27.056033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.845 [2024-12-09 16:00:27.056510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.845 [2024-12-09 16:00:27.056682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.845 [2024-12-09 16:00:27.056692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.845 [2024-12-09 16:00:27.056702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.845 [2024-12-09 16:00:27.056709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.845 [2024-12-09 16:00:27.068688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.845 [2024-12-09 16:00:27.069125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.845 [2024-12-09 16:00:27.069142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:31.846 [2024-12-09 16:00:27.069150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:31.846 [2024-12-09 16:00:27.069344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:31.846 [2024-12-09 16:00:27.069548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.846 [2024-12-09 16:00:27.069563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.846 [2024-12-09 16:00:27.069571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.846 [2024-12-09 16:00:27.069578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.081480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.081898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.081916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.081924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.082084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.082268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.082279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.082285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.082292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.094285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.094705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.094722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.094729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.095335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.095840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.095849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.095856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.095862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.107163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.107584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.107602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.107610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.107770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.107930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.107939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.107946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.107952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.119921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.120275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.120323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.120347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.120780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.120960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.120970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.120976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.120982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.132749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.133078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.133096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.133103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.133285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.133456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.133466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.133472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.133478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.145619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.146027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.146045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.146055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.146222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.146406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.146415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.146422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.146428] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.158421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.158843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.158889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.158913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.159512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.159726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.159734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.159740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.106 [2024-12-09 16:00:27.159765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.106 [2024-12-09 16:00:27.173493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.106 [2024-12-09 16:00:27.174013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.106 [2024-12-09 16:00:27.174057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.106 [2024-12-09 16:00:27.174080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.106 [2024-12-09 16:00:27.174622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.106 [2024-12-09 16:00:27.174881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.106 [2024-12-09 16:00:27.174893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.106 [2024-12-09 16:00:27.174903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.174913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.186424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.186862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.186907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.186930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.187528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.187772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.187782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.187789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.187796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.199183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.199614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.199632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.199640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.199800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.199960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.199969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.199975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.199982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.211956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.212369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.212387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.212394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.212554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.212716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.212725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.212731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.212737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.224759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.225170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.225212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.225261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.225825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.225986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.225993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.226002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.226008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.237615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.237980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.237998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.238006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.238175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.238370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.238380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.238387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.238394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.250638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.251069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.251088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.251096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.251277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.251452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.251461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.251468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.251475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.263641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.264067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.264117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.264141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.264737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.265214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.265228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.265235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.265241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.276464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.276814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.276830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.276837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.276997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.277157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.277166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.277173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.277180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.289194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.289596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.289642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.289665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.290175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.290366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.290376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.290382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.290389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.107 [2024-12-09 16:00:27.302036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.107 [2024-12-09 16:00:27.302456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.107 [2024-12-09 16:00:27.302473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.107 [2024-12-09 16:00:27.302480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.107 [2024-12-09 16:00:27.302640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.107 [2024-12-09 16:00:27.302802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.107 [2024-12-09 16:00:27.302811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.107 [2024-12-09 16:00:27.302818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.107 [2024-12-09 16:00:27.302823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.108 [2024-12-09 16:00:27.314871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.108 [2024-12-09 16:00:27.315301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.108 [2024-12-09 16:00:27.315347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.108 [2024-12-09 16:00:27.315385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.108 [2024-12-09 16:00:27.315914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.108 [2024-12-09 16:00:27.316075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.108 [2024-12-09 16:00:27.316083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.108 [2024-12-09 16:00:27.316089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.108 [2024-12-09 16:00:27.316094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.108 [2024-12-09 16:00:27.327637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.108 [2024-12-09 16:00:27.327995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.108 [2024-12-09 16:00:27.328013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.108 [2024-12-09 16:00:27.328021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.108 [2024-12-09 16:00:27.328191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.108 [2024-12-09 16:00:27.328388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.108 [2024-12-09 16:00:27.328400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.108 [2024-12-09 16:00:27.328410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.108 [2024-12-09 16:00:27.328421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.367 [2024-12-09 16:00:27.340555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.367 [2024-12-09 16:00:27.340978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.367 [2024-12-09 16:00:27.340995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.341003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.341164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.341354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.341364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.341371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.341378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 [2024-12-09 16:00:27.353377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.353792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.353809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.353816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.353976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.354140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.354150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.354156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.354162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 [2024-12-09 16:00:27.366106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.366501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.366518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.366526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.366685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.366846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.366856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.366862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.366869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 [2024-12-09 16:00:27.378887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.379282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.379299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.379307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.379467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.379627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.379636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.379642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.379649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2151413 Killed "${NVMF_APP[@]}" "$@" 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2152734 00:27:32.368 [2024-12-09 16:00:27.391886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2152734 00:27:32.368 [2024-12-09 16:00:27.392287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.392305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.392313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.392487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2152734 ']' 00:27:32.368 [2024-12-09 16:00:27.392663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.368 [2024-12-09 16:00:27.392673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.392680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.392686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.368 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.368 [2024-12-09 16:00:27.404906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.405267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.405285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.405292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.405465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.405639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.405648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.405655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.405661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 [2024-12-09 16:00:27.417933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.418337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.418354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.418362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.418530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.418699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.418707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.418717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.418724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.368 [2024-12-09 16:00:27.430991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.431419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.431436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.431444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.431613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.431783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.368 [2024-12-09 16:00:27.431792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.368 [2024-12-09 16:00:27.431798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.368 [2024-12-09 16:00:27.431805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.368 [2024-12-09 16:00:27.438956] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:27:32.368 [2024-12-09 16:00:27.438994] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.368 [2024-12-09 16:00:27.443934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.368 [2024-12-09 16:00:27.444358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.368 [2024-12-09 16:00:27.444375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.368 [2024-12-09 16:00:27.444382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.368 [2024-12-09 16:00:27.444555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.368 [2024-12-09 16:00:27.444730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.444738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.444745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.444751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.456874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.457231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.457249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.457257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.457431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.457611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.457622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.457629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.457635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.469965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.470402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.470419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.470427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.470608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.470777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.470785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.470792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.470799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.483018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.483429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.483446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.483453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.483623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.483792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.483800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.483806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.483812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.495930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.496359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.496376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.496384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.496558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.496732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.496740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.496747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.496755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.509023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.509359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.509376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.509385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.509559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.509741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.509751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.509758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.509765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.517539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.369 [2024-12-09 16:00:27.522102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.522559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.522577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.522586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.522769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.522938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.522948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.522955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.522961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.535002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.535358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.535375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.535383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.535552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.535721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.535730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.535736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.535743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.547940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.548296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.548313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.548320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.548489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.548659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.548668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.548674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.548680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.369 [2024-12-09 16:00:27.556771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.369 [2024-12-09 16:00:27.556795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.369 [2024-12-09 16:00:27.556802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.369 [2024-12-09 16:00:27.556809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:32.369 [2024-12-09 16:00:27.556815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.369 [2024-12-09 16:00:27.558069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.369 [2024-12-09 16:00:27.558174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.369 [2024-12-09 16:00:27.558176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.369 [2024-12-09 16:00:27.560990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.369 [2024-12-09 16:00:27.561438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.369 [2024-12-09 16:00:27.561459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.369 [2024-12-09 16:00:27.561468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.369 [2024-12-09 16:00:27.561643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.369 [2024-12-09 16:00:27.561817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.369 [2024-12-09 16:00:27.561827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.369 [2024-12-09 16:00:27.561835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.369 [2024-12-09 16:00:27.561841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.369 [2024-12-09 16:00:27.574097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.370 [2024-12-09 16:00:27.574564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.370 [2024-12-09 16:00:27.574584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.370 [2024-12-09 16:00:27.574592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.370 [2024-12-09 16:00:27.574767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.370 [2024-12-09 16:00:27.574943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.370 [2024-12-09 16:00:27.574958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.370 [2024-12-09 16:00:27.574965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.370 [2024-12-09 16:00:27.574972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.370 [2024-12-09 16:00:27.587210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.370 [2024-12-09 16:00:27.587667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.370 [2024-12-09 16:00:27.587687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.370 [2024-12-09 16:00:27.587694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.370 [2024-12-09 16:00:27.587868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.370 [2024-12-09 16:00:27.588043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.370 [2024-12-09 16:00:27.588051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.370 [2024-12-09 16:00:27.588059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.370 [2024-12-09 16:00:27.588065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.629 [2024-12-09 16:00:27.600375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.629 [2024-12-09 16:00:27.600748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.629 [2024-12-09 16:00:27.600767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.629 [2024-12-09 16:00:27.600775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.629 [2024-12-09 16:00:27.600951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.629 [2024-12-09 16:00:27.601126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.629 [2024-12-09 16:00:27.601135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.629 [2024-12-09 16:00:27.601142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.629 [2024-12-09 16:00:27.601149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.629 [2024-12-09 16:00:27.613409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.629 [2024-12-09 16:00:27.613777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.629 [2024-12-09 16:00:27.613797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.629 [2024-12-09 16:00:27.613805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.629 [2024-12-09 16:00:27.613979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.629 [2024-12-09 16:00:27.614156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.629 [2024-12-09 16:00:27.614164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.629 [2024-12-09 16:00:27.614171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.629 [2024-12-09 16:00:27.614178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.629 [2024-12-09 16:00:27.626424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.629 [2024-12-09 16:00:27.626859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.629 [2024-12-09 16:00:27.626876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.629 [2024-12-09 16:00:27.626884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.629 [2024-12-09 16:00:27.627058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.629 [2024-12-09 16:00:27.627237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.629 [2024-12-09 16:00:27.627246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.629 [2024-12-09 16:00:27.627253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.629 [2024-12-09 16:00:27.627260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.629 [2024-12-09 16:00:27.639652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.629 [2024-12-09 16:00:27.640058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.629 [2024-12-09 16:00:27.640074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.629 [2024-12-09 16:00:27.640082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.629 [2024-12-09 16:00:27.640316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.629 [2024-12-09 16:00:27.640502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.629 [2024-12-09 16:00:27.640511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.629 [2024-12-09 16:00:27.640518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.629 [2024-12-09 16:00:27.640524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.629 [2024-12-09 16:00:27.652756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.629 [2024-12-09 16:00:27.653162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.629 [2024-12-09 16:00:27.653179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.653186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.653364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.653538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.653546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.653553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.653559] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.630 [2024-12-09 16:00:27.665780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.666188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.666205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.666213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.666389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.666564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.666573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.666579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.666585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 [2024-12-09 16:00:27.678828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.679222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.679239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.679247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.679420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.679593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.679601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.679608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.679614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 [2024-12-09 16:00:27.691885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.692182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.692201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.692210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.692390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.692567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.692576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.692583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.692590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.630 [2024-12-09 16:00:27.701912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.630 [2024-12-09 16:00:27.704982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.705391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.705408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.705416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.705589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.705764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.705773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.705779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.705785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.630 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.630 5110.50 IOPS, 19.96 MiB/s [2024-12-09T15:00:27.858Z] [2024-12-09 16:00:27.717982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.718277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.718294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.718302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.718475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.718650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.718658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.630 [2024-12-09 16:00:27.718665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.630 [2024-12-09 16:00:27.718672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.630 [2024-12-09 16:00:27.731085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.630 [2024-12-09 16:00:27.731501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-09 16:00:27.731518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.630 [2024-12-09 16:00:27.731526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.630 [2024-12-09 16:00:27.731700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.630 [2024-12-09 16:00:27.731878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.630 [2024-12-09 16:00:27.731886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.631 [2024-12-09 16:00:27.731893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.631 [2024-12-09 16:00:27.731899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.631 Malloc0 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.631 [2024-12-09 16:00:27.744131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.631 [2024-12-09 16:00:27.744567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.631 [2024-12-09 16:00:27.744584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.631 [2024-12-09 16:00:27.744591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.631 [2024-12-09 16:00:27.744764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.631 [2024-12-09 16:00:27.744938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.631 [2024-12-09 16:00:27.744946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.631 [2024-12-09 16:00:27.744953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.631 [2024-12-09 16:00:27.744959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.631 [2024-12-09 16:00:27.757184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.631 [2024-12-09 16:00:27.757620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.631 [2024-12-09 16:00:27.757637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x916aa0 with addr=10.0.0.2, port=4420 00:27:32.631 [2024-12-09 16:00:27.757645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916aa0 is same with the state(6) to be set 00:27:32.631 [2024-12-09 16:00:27.757818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x916aa0 (9): Bad file descriptor 00:27:32.631 [2024-12-09 16:00:27.757993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.631 [2024-12-09 16:00:27.758001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.631 [2024-12-09 16:00:27.758009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.631 [2024-12-09 16:00:27.758015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.631 [2024-12-09 16:00:27.762486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.631 16:00:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2151821 00:27:32.631 [2024-12-09 16:00:27.770231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.631 [2024-12-09 16:00:27.799385] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:27:34.500 5855.71 IOPS, 22.87 MiB/s [2024-12-09T15:00:31.105Z] 6549.88 IOPS, 25.59 MiB/s [2024-12-09T15:00:32.041Z] 7106.33 IOPS, 27.76 MiB/s [2024-12-09T15:00:32.977Z] 7545.00 IOPS, 29.47 MiB/s [2024-12-09T15:00:33.913Z] 7904.09 IOPS, 30.88 MiB/s [2024-12-09T15:00:34.849Z] 8195.17 IOPS, 32.01 MiB/s [2024-12-09T15:00:35.783Z] 8438.85 IOPS, 32.96 MiB/s [2024-12-09T15:00:37.160Z] 8640.57 IOPS, 33.75 MiB/s [2024-12-09T15:00:37.160Z] 8819.93 IOPS, 34.45 MiB/s 00:27:41.932 Latency(us) 00:27:41.932 [2024-12-09T15:00:37.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.932 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:41.932 Verification LBA range: start 0x0 length 0x4000 00:27:41.932 Nvme1n1 : 15.01 8825.28 34.47 11067.88 0.00 6415.11 417.40 13356.86 00:27:41.932 [2024-12-09T15:00:37.160Z] =================================================================================================================== 00:27:41.932 [2024-12-09T15:00:37.160Z] Total : 8825.28 34.47 11067.88 0.00 6415.11 417.40 13356.86 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.932 rmmod nvme_tcp 00:27:41.932 rmmod nvme_fabrics 00:27:41.932 rmmod nvme_keyring 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2152734 ']' 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2152734 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2152734 ']' 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2152734 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.932 16:00:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152734 00:27:41.932 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:41.932 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:41.932 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152734' 00:27:41.932 killing process with pid 2152734 00:27:41.932 
16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2152734 00:27:41.932 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2152734 00:27:42.191 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.192 16:00:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.096 00:27:44.096 real 0m25.858s 00:27:44.096 user 1m0.101s 00:27:44.096 sys 0m6.747s 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.096 ************************************ 00:27:44.096 END TEST nvmf_bdevperf 00:27:44.096 
************************************ 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.096 16:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.356 ************************************ 00:27:44.356 START TEST nvmf_target_disconnect 00:27:44.356 ************************************ 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:44.356 * Looking for test storage... 00:27:44.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:44.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.356 --rc genhtml_branch_coverage=1 00:27:44.356 --rc genhtml_function_coverage=1 00:27:44.356 --rc genhtml_legend=1 00:27:44.356 --rc geninfo_all_blocks=1 00:27:44.356 --rc geninfo_unexecuted_blocks=1 
00:27:44.356 00:27:44.356 ' 00:27:44.356 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:44.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.357 --rc genhtml_branch_coverage=1 00:27:44.357 --rc genhtml_function_coverage=1 00:27:44.357 --rc genhtml_legend=1 00:27:44.357 --rc geninfo_all_blocks=1 00:27:44.357 --rc geninfo_unexecuted_blocks=1 00:27:44.357 00:27:44.357 ' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:44.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.357 --rc genhtml_branch_coverage=1 00:27:44.357 --rc genhtml_function_coverage=1 00:27:44.357 --rc genhtml_legend=1 00:27:44.357 --rc geninfo_all_blocks=1 00:27:44.357 --rc geninfo_unexecuted_blocks=1 00:27:44.357 00:27:44.357 ' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:44.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.357 --rc genhtml_branch_coverage=1 00:27:44.357 --rc genhtml_function_coverage=1 00:27:44.357 --rc genhtml_legend=1 00:27:44.357 --rc geninfo_all_blocks=1 00:27:44.357 --rc geninfo_unexecuted_blocks=1 00:27:44.357 00:27:44.357 ' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.357 16:00:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:44.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.357 16:00:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.925 
16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.925 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:50.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:50.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:50.926 Found net devices under 0000:af:00.0: cvl_0_0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
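The discovery loop above (nvmf/common.sh@410-428) resolves each NIC's kernel interface name by globbing `/sys/bus/pci/devices/<BDF>/net/*` and stripping the path prefix with `##*/`. A minimal sketch of that lookup — the BDFs and `cvl_0_*` names come from this log, but the throwaway directory standing in for the real sysfs tree is an assumption so the snippet runs without the E810 hardware:

```shell
# Fake sysfs tree standing in for /sys/bus/pci/devices (assumption: no real HW).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

for pci in "$sysfs"/*; do
  # Same idea as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh,
  # followed by the "${pci_net_devs[@]##*/}" basename strip.
  for dev in "$pci"/net/*; do
    echo "Found net devices under ${pci##*/}: ${dev##*/}"
  done
done
rm -rf "$sysfs"
```

On the real machine the same glob runs against sysfs directly, which is why the log reports `cvl_0_0` under 0000:af:00.0 and `cvl_0_1` under 0000:af:00.1.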
00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:50.926 Found net devices under 0000:af:00.1: cvl_0_1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.926 16:00:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:50.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:27:50.926 00:27:50.926 --- 10.0.0.2 ping statistics --- 00:27:50.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.926 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:50.926 00:27:50.926 --- 10.0.0.1 ping statistics --- 00:27:50.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.926 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.926 16:00:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.926 ************************************ 00:27:50.926 START TEST nvmf_target_disconnect_tc1 00:27:50.926 ************************************ 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:50.926 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.927 [2024-12-09 16:00:45.569923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.927 [2024-12-09 16:00:45.569961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b410 with 
addr=10.0.0.2, port=4420 00:27:50.927 [2024-12-09 16:00:45.569983] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:50.927 [2024-12-09 16:00:45.569992] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:50.927 [2024-12-09 16:00:45.569998] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:50.927 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:50.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:50.927 Initializing NVMe Controllers 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:50.927 00:27:50.927 real 0m0.120s 00:27:50.927 user 0m0.051s 00:27:50.927 sys 0m0.069s 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 ************************************ 00:27:50.927 END TEST nvmf_target_disconnect_tc1 00:27:50.927 ************************************ 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:50.927 16:00:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 ************************************ 00:27:50.927 START TEST nvmf_target_disconnect_tc2 00:27:50.927 ************************************ 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2157863 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2157863 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2157863 ']' 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 [2024-12-09 16:00:45.703351] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:50.927 [2024-12-09 16:00:45.703392] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.927 [2024-12-09 16:00:45.781944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.927 [2024-12-09 16:00:45.822798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.927 [2024-12-09 16:00:45.822831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.927 [2024-12-09 16:00:45.822838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.927 [2024-12-09 16:00:45.822844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.927 [2024-12-09 16:00:45.822849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:50.927 [2024-12-09 16:00:45.824329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:50.927 [2024-12-09 16:00:45.824459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:50.927 [2024-12-09 16:00:45.824565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:50.927 [2024-12-09 16:00:45.824566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 Malloc0 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 [2024-12-09 16:00:45.989710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 [2024-12-09 16:00:46.018678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2157886 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:50.927 16:00:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.833 16:00:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2157863 00:27:52.833 16:00:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Write completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Write completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Read completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.833 Write completed with error (sct=0, sc=8) 00:27:52.833 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 
Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 [2024-12-09 16:00:48.046673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O 
failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Read completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 00:27:52.834 Write completed with error (sct=0, sc=8) 00:27:52.834 starting I/O failed 
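Each `completed with error (sct=0, sc=8)` / `starting I/O failed` pair above is one failed I/O completion reported by the reconnect example after `kill -9` took down the target. A quick way to tally such a burst by opcode when triaging a captured log — the here-doc below is a stand-in excerpt, not the full output:

```shell
# Count failed completions per opcode ($1 is "Read" or "Write"); sort for
# deterministic output order.
awk '/completed with error/ { count[$1]++ }
     END { for (op in count) print op, count[op] }' <<'EOF' | sort
Read completed with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8)
Read completed with error (sct=0, sc=8)
EOF
```

For the excerpt above this prints `Read 2` and `Write 1`; run against the real log it shows how the reads/writes of the `-w randrw -M 50` workload split across the failures.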
00:27:52.834 [2024-12-09 16:00:48.046888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:52.834 [2024-12-09 16:00:48.047179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.834 [2024-12-09 16:00:48.047197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:52.834 qpair failed and we were unable to recover it. 00:27:52.834 [2024-12-09 16:00:48.047313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.834 [2024-12-09 16:00:48.047325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:52.834 qpair failed and we were unable to recover it. 00:27:52.834 [2024-12-09 16:00:48.047468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.834 [2024-12-09 16:00:48.047479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:52.834 qpair failed and we were unable to recover it. 00:27:52.834 [2024-12-09 16:00:48.047664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.834 [2024-12-09 16:00:48.047696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:52.834 qpair failed and we were unable to recover it. 00:27:52.834 [2024-12-09 16:00:48.047929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.834 [2024-12-09 16:00:48.047961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:52.834 qpair failed and we were unable to recover it. 
00:27:52.834 [2024-12-09 16:00:48.048086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.048330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.048502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.048693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.048785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.048945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.048955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.049191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.049231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.049426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.049461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.049704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.049736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.049935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.049968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.050148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.050181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.050452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.050491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.050634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.050665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.050822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.050833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.051068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.051101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.051238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.051271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.834 [2024-12-09 16:00:48.051522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.834 [2024-12-09 16:00:48.051561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.834 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.051660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.051673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.051821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.051832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.052006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.052016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.052212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.052229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.052375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.052404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.052651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.052685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.052960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.052991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.053282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.053317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.053531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.053564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Read completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 Write completed with error (sct=0, sc=8)
00:27:52.835 starting I/O failed
00:27:52.835 [2024-12-09 16:00:48.054210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:52.835 [2024-12-09 16:00:48.054492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.054546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.054817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.054852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.054987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.055020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.055247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.055281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.055425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.055459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.055645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.055677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.055825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.055855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.056126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.056159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.056378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.056410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.056603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.056635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.056758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.056791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.057881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.057911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.058089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.058115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.058234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.835 [2024-12-09 16:00:48.058265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.835 qpair failed and we were unable to recover it.
00:27:52.835 [2024-12-09 16:00:48.058455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.058482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.058602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.058629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.058757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.058785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.059066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.059094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.059359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.059388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.059558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.059585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:52.836 [2024-12-09 16:00:48.059882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.836 [2024-12-09 16:00:48.059910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:52.836 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.060153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.060181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.060385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.060412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.060547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.060575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.060781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.060808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.060987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.061896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.061923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.062035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.062063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.062251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.062292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.062532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.062564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.062761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.062796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.063065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.063098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.063361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.063396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.063585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.063617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.063752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.063785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.063999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.064031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.064299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.064333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.064540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.064571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.064799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.064831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.064956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.064987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.065235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.065268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.065403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.065434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.065698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.065826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.065860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.066118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.066150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.066369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.066404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.066592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.117 [2024-12-09 16:00:48.066624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.117 qpair failed and we were unable to recover it.
00:27:53.117 [2024-12-09 16:00:48.066874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.118 [2024-12-09 16:00:48.066906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.118 qpair failed and we were unable to recover it.
00:27:53.118 [2024-12-09 16:00:48.067148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.118 [2024-12-09 16:00:48.067181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.118 qpair failed and we were unable to recover it.
00:27:53.118 [2024-12-09 16:00:48.067375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.118 [2024-12-09 16:00:48.067408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.118 qpair failed and we were unable to recover it.
00:27:53.118 [2024-12-09 16:00:48.067601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.118 [2024-12-09 16:00:48.067633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.118 qpair failed and we were unable to recover it.
00:27:53.118 [2024-12-09 16:00:48.067850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.118 [2024-12-09 16:00:48.067882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.118 qpair failed and we were unable to recover it.
00:27:53.118 [2024-12-09 16:00:48.068168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.068212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.068402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.068434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.068628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.068661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.068786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.068817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.069085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.069119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.069248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.069283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.069532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.069565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.069747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.069780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.070046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.070079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.070336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.070370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.070563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.070595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.070745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.070778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.070957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.070988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.071128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.071160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.071325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.071360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.071536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.071567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.071741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.071773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.071908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.071946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.072130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.072164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.072378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.072410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.072654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.072688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.072988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.073020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.073234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.073267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.073464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.073498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.073741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.073773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.074013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.074045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.074341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.074377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.074527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.074560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.074733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.074764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.118 [2024-12-09 16:00:48.075060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.075093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 
00:27:53.118 [2024-12-09 16:00:48.075342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.118 [2024-12-09 16:00:48.075375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.118 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.075528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.075561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.075702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.075735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.075953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.075992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.076177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.076209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.076488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.076522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.076663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.076696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.077003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.077035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.077237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.077271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.077512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.077545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.077787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.077819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.077958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.077991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.078107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.078139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.078444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.078478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.078586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.078626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.078758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.078788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.078981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.079011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.079211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.079267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.079439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.079470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.079733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.079766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.080053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.080084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.080364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.080398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.080592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.080625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.080811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.080847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.081039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.081072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.081191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.081231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.081431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.081463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.081716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.081748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.082012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.082045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.082238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.082272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.082462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.082494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.082668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.082701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.082942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.082975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.083240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.083274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.083455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.083488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 
00:27:53.119 [2024-12-09 16:00:48.083664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.119 [2024-12-09 16:00:48.083696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.119 qpair failed and we were unable to recover it. 00:27:53.119 [2024-12-09 16:00:48.083949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.083981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.084271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.084305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.084513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.084545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.084814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.084845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 
00:27:53.120 [2024-12-09 16:00:48.085019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.085050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.085288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.085323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.085569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.085600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.085838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.085870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.086133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.086166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 
00:27:53.120 [2024-12-09 16:00:48.086382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.086416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.086680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.086712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.086976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.087006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.087304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.087338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.087638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.087672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 
00:27:53.120 [2024-12-09 16:00:48.087956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.087988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.088192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.088251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.088387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.088419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.088612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.088643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 00:27:53.120 [2024-12-09 16:00:48.088769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.120 [2024-12-09 16:00:48.088801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.120 qpair failed and we were unable to recover it. 
00:27:53.120 [2024-12-09 16:00:48.089038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.120 [2024-12-09 16:00:48.089076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.120 qpair failed and we were unable to recover it.
00:27:53.121 [2024-12-09 16:00:48.098637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.098681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.099035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.099108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.099308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.099495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.099528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.099670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.099702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 
00:27:53.121 [2024-12-09 16:00:48.099992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.100023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.100292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.100326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.100609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.100640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.100925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.100957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.121 qpair failed and we were unable to recover it. 00:27:53.121 [2024-12-09 16:00:48.101238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.121 [2024-12-09 16:00:48.101271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.101474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.101506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.101773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.101806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.101997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.102027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.102238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.102272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.102424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.102465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.102658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.102689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.102827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.102858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.103049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.103081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.103334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.103367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.103478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.103508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.103660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.103692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.103821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.103852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.104036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.104067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.104308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.104341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.104545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.104577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.104769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.104802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.105043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.105074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.105225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.105259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.105461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.105493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.105673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.105704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.105957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.105988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.106234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.106267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.106514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.106546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.106727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.106759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 00:27:53.122 [2024-12-09 16:00:48.106969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.106999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.122 qpair failed and we were unable to recover it. 
00:27:53.122 [2024-12-09 16:00:48.107182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.122 [2024-12-09 16:00:48.107214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.107407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.107438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.107575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.107606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.107817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.107849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.108043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.108074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.108270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.108304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.108499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.108530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.108773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.108806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.108996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.109027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.109150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.109181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.109377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.109410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.109611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.109643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.109938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.109969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.110210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.110251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.110382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.110414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.110672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.110704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.111004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.111035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.111286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.111320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.111468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.111499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.111687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.111724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.112039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.112070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.112317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.112351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.112545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.112577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.112821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.112853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.113041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.113071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.113257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.113290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.113486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.113516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.113719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.113752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.113942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.113974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 00:27:53.123 [2024-12-09 16:00:48.114240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.123 [2024-12-09 16:00:48.114274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.123 qpair failed and we were unable to recover it. 
00:27:53.123 [2024-12-09 16:00:48.114478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.123 [2024-12-09 16:00:48.114509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.123 qpair failed and we were unable to recover it.
00:27:53.123 [2024-12-09 16:00:48.114637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.123 [2024-12-09 16:00:48.114668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.123 qpair failed and we were unable to recover it.
00:27:53.123 [2024-12-09 16:00:48.114920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.123 [2024-12-09 16:00:48.114950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.123 qpair failed and we were unable to recover it.
00:27:53.123 [2024-12-09 16:00:48.115133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.115165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.115448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.115481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.115689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.115720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.115925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.115958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.116252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.116286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.116493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.116524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.116724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.116756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.117046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.117078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.117300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.117333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.117510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.117542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.117731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.117764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.118027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.118059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.118243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.118277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Write completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 Read completed with error (sct=0, sc=8)
00:27:53.124 starting I/O failed
00:27:53.124 [2024-12-09 16:00:48.118914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:53.124 [2024-12-09 16:00:48.119265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.119330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.119565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.119599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.119813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.119845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.120037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.120069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.120257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.120292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.120486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.120517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.120712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.120744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.120976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.124 [2024-12-09 16:00:48.121008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.124 qpair failed and we were unable to recover it.
00:27:53.124 [2024-12-09 16:00:48.121301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.121333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.121596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.121628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.121910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.121942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.122152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.122183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.122384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.122416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.122613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.122644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.122916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.122948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.123243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.123277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.123475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.123506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.123702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.123733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.124018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.124049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.124384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.124417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.124612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.124644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.124789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.124821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.125013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.125045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.125366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.125399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.125590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.125622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.125754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.125785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.126015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.126047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.126252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.126284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.126494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.126525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.126791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.126823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.127097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.127128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.127295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.127513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.127545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.127687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.127725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.127846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.127878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.128165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.128197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.128404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.128436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.128681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.128712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.128911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.128942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.129133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.129164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.129429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.129462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.125 qpair failed and we were unable to recover it.
00:27:53.125 [2024-12-09 16:00:48.129728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.125 [2024-12-09 16:00:48.129759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.129995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.130229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.130399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.130627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.130808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.130967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.130999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.131186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.131227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.131464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.131497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.131694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.131727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.131863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.131894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.132162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.132193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.132418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.132452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.132593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.132625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.132808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.132838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.133024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.133056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.133347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.133381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.133570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.133603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.133811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.133842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.134094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.134129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.134338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.134372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.134573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.134605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.134860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.134893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.135153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.135187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.135332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.135363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.135514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.135688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.135719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.135951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.135982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.136120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.136151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.136385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.126 [2024-12-09 16:00:48.136418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.126 qpair failed and we were unable to recover it.
00:27:53.126 [2024-12-09 16:00:48.136689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.136720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.126 qpair failed and we were unable to recover it. 00:27:53.126 [2024-12-09 16:00:48.136844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.136875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.126 qpair failed and we were unable to recover it. 00:27:53.126 [2024-12-09 16:00:48.137007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.137045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.126 qpair failed and we were unable to recover it. 00:27:53.126 [2024-12-09 16:00:48.137337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.137370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.126 qpair failed and we were unable to recover it. 00:27:53.126 [2024-12-09 16:00:48.137570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.137601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.126 qpair failed and we were unable to recover it. 
00:27:53.126 [2024-12-09 16:00:48.137786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.126 [2024-12-09 16:00:48.137817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.138084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.138114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.138389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.138421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.138647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.138677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.138906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.138938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 
00:27:53.127 [2024-12-09 16:00:48.139210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.139252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.139454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.139485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.139675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.139706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.139895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.139926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.140191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.140246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 
00:27:53.127 [2024-12-09 16:00:48.140433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.140465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.140699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.140732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.140969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.141001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.141142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.141173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.141368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.141402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 
00:27:53.127 [2024-12-09 16:00:48.141533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.141564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.141712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.141743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.142033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.142065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.142264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.142297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.142439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.142470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 
00:27:53.127 [2024-12-09 16:00:48.142613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.142644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.142828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.142859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.143105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.143137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.143427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.143773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.143805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 
00:27:53.127 [2024-12-09 16:00:48.144000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.144031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.144168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.144199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.144344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.144377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.127 qpair failed and we were unable to recover it. 00:27:53.127 [2024-12-09 16:00:48.144668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.127 [2024-12-09 16:00:48.144700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.145004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.145035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.145226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.145259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.145509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.145541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.145690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.145721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.145872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.145903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.146098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.146128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.146327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.146360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.146497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.146528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.146745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.146782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.146978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.147009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.147197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.147237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.147439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.147471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.147579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.147610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.147834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.147866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.148111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.148142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.148331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.148365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.148573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.148605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.148763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.148796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.149017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.149050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.149353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.149386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.149593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.149625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.149843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.149874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.150070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.150102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.150326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.150360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.150487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.150517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.150729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.150763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.150980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.151012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.151191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.151230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.151421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.151453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.151655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.151688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.151895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.151926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 
00:27:53.128 [2024-12-09 16:00:48.152041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.152073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.152266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.152300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.128 qpair failed and we were unable to recover it. 00:27:53.128 [2024-12-09 16:00:48.152523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.128 [2024-12-09 16:00:48.152556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.152808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.152840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.153039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.153071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 
00:27:53.129 [2024-12-09 16:00:48.153351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.153384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.153521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.153553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.153752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.153784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.153966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.153999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.154175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.154208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 
00:27:53.129 [2024-12-09 16:00:48.154428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.154460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.154616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.154647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.154839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.154872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.155119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.155151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.155293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.155325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 
00:27:53.129 [2024-12-09 16:00:48.155466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.155499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.155656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.155688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.155812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.155849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.156043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.156074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 00:27:53.129 [2024-12-09 16:00:48.156224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.129 [2024-12-09 16:00:48.156256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.129 qpair failed and we were unable to recover it. 
00:27:53.129 [2024-12-09 16:00:48.156408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.129 [2024-12-09 16:00:48.156440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:53.129 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / ECONNREFUSED; sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry from 16:00:48.156583 through 16:00:48.184748; only the timestamps differ ...]
00:27:53.132 [2024-12-09 16:00:48.184953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.184984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.185181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.185213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.185507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.185545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.185753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.185785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.186005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.186037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 
00:27:53.132 [2024-12-09 16:00:48.186253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.186286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.186414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.186445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.186721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.186752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.186959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.186989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.187200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.187239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 
00:27:53.132 [2024-12-09 16:00:48.187425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.187456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.187809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.188061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.188093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.188334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.188367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.188513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.188544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 
00:27:53.132 [2024-12-09 16:00:48.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.188766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.188966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.188997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.132 qpair failed and we were unable to recover it. 00:27:53.132 [2024-12-09 16:00:48.189281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.132 [2024-12-09 16:00:48.189316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.189542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.189573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.189711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.189742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 
00:27:53.133 [2024-12-09 16:00:48.189869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.189900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.190113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.190145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.190337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.190369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.190578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.190609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 00:27:53.133 [2024-12-09 16:00:48.190799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.133 [2024-12-09 16:00:48.190831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.133 qpair failed and we were unable to recover it. 
00:27:53.133 [2024-12-09 16:00:48.192027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.133 [2024-12-09 16:00:48.192107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.133 qpair failed and we were unable to recover it.
00:27:53.134 [2024-12-09 16:00:48.208505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.134 [2024-12-09 16:00:48.208537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.134 qpair failed and we were unable to recover it. 00:27:53.134 [2024-12-09 16:00:48.208669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.134 [2024-12-09 16:00:48.208701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.208943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.208976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.209195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.209246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.209393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.209425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.209622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.209655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.209961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.209993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.210293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.210328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.210472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.210505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.210755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.210787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.211088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.211119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.211360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.211394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.211602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.211635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.211764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.211796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.211930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.211963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.212261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.212295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.212502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.212536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.212665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.212697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.212856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.212888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.213073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.213106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.213287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.213320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.213536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.213568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.213695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.213726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.214037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.214070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.214252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.214285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.214445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.214477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.214662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.214694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.214886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.214918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.215066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.215098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.215304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.215344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.215605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.215639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.215970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.216227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.216415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.216559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.216729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.216965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.216997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.217272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.217305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.217434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.217466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.217667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.217699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 
00:27:53.135 [2024-12-09 16:00:48.217823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.135 [2024-12-09 16:00:48.217854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.135 qpair failed and we were unable to recover it. 00:27:53.135 [2024-12-09 16:00:48.218064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.218096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.218298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.218334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.218544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.218576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.218777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.218809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.219128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.219161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.219458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.219492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.219715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.219747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.219958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.219989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.220173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.220204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.220422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.220455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.220654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.220686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.220866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.220898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.221117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.221149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.221354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.221387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.221627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.221660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.221817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.221848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.222059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.222093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.222302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.222338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.222468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.222499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.222644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.222676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.222911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.222943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.223146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.223176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.223419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.223452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.223594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.223627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.223810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.223843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.224112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.224146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.224279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.224311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.224458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.224490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.224743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.224776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 
00:27:53.136 [2024-12-09 16:00:48.224996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.225028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.225243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.225278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.225460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.136 [2024-12-09 16:00:48.225494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.136 qpair failed and we were unable to recover it. 00:27:53.136 [2024-12-09 16:00:48.225638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.225670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.225883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.225916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.226173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.226206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.226476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.226508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.226664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.226695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.226925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.226958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.227212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.227271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.227472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.227506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.227641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.227673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.227888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.227921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.228127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.228161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.228352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.228387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.228548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.228580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.228860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.228894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.229019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.229053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.229303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.229338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.229641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.229674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.229831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.229864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.230067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.230100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.230416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.230449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.230660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.230693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.230846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.230877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.231098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.231129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.231308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.231341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.231554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.231595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.231874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.231907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.232104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.232136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.232346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.232380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.232584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.232616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.232767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.232799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.233122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.233154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.233464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.233498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.233742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.233774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.234075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.234106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.234392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.234428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.234630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.234661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.234982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.235014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 
00:27:53.137 [2024-12-09 16:00:48.235225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.235259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.235407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.235439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.137 [2024-12-09 16:00:48.235640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.137 [2024-12-09 16:00:48.235672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.137 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.235868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.235900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.236174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.236206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.236376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.236410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.236609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.236641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.236771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.236803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.237102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.237135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.237463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.237500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.237650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.237682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.237987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.238020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.238240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.238274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.238480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.238513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.238697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.238730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.238965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.238998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.239192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.239234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.239494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.239527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.239687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.239720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.239926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.239959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.240183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.240226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.240372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.240404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.240557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.240589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.240793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.240826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.241054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.241088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.241283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.241319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.241467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.241500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.241652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.241690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.241848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.241891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.242137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.242170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.242378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.242412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.242553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.242585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.242795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.242830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.243011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.243047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.243352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.243388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.243555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.243590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.243783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.243816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.244037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.244069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.244203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.244266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 00:27:53.138 [2024-12-09 16:00:48.244522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.138 [2024-12-09 16:00:48.244554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.138 qpair failed and we were unable to recover it. 
00:27:53.138 [2024-12-09 16:00:48.244707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.244739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.244980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.245014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.245237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.245271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.245421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.245454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.245605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.245641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 
00:27:53.139 [2024-12-09 16:00:48.245892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.245925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.246210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.246255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.246410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.246444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.246651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.246816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.246849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 
00:27:53.139 [2024-12-09 16:00:48.247070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.247105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.247243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.247278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.247416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.247448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.247665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.247699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.247846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.247878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 
00:27:53.139 [2024-12-09 16:00:48.248087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.248126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.248267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.248302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.248454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.248486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.248641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.248672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 00:27:53.139 [2024-12-09 16:00:48.248866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.139 [2024-12-09 16:00:48.248899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.139 qpair failed and we were unable to recover it. 
00:27:53.139 [2024-12-09 16:00:48.249155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.139 [2024-12-09 16:00:48.249188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:53.139 qpair failed and we were unable to recover it.
00:27:53.139 [2024-12-09 16:00:48.251056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.139 [2024-12-09 16:00:48.251147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.139 qpair failed and we were unable to recover it.
00:27:53.142 [2024-12-09 16:00:48.276583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.276615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.276879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.276912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.277116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.277149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.277414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.277447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.277614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.277647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 
00:27:53.142 [2024-12-09 16:00:48.277805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.277837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.278065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.278099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.278321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.278354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.278623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.278656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.278791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.278824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 
00:27:53.142 [2024-12-09 16:00:48.279094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.279127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.279342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.279375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.279567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.279598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.279855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.279887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.280095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.280128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 
00:27:53.142 [2024-12-09 16:00:48.280340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.280372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.142 [2024-12-09 16:00:48.280555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.142 [2024-12-09 16:00:48.280587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.142 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.280806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.280839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.281118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.281149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.281297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.281330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.281608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.281639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.281771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.281802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.281928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.281960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.282242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.282275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.282600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.282631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.282929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.282960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.283257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.283290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.283525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.283562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.283743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.283775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.283920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.283952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.284155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.284186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.284409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.284442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.284695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.284726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.285013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.285045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.285266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.285299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.285498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.285529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.285743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.285773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.286051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.286083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.286344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.286378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.286657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.286690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.286885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.286916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.287250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.287283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.287487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.287519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.287777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.287810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.288032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.288064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.288282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.288315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.288575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.288608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.288813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.288845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.289080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.289325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.289358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.289544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.289575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.289741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.289774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.289972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.290004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.290318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.290351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.143 [2024-12-09 16:00:48.290570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.290604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 
00:27:53.143 [2024-12-09 16:00:48.290843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.143 [2024-12-09 16:00:48.290875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.143 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.291137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.291169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.291446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.291479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.291614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.291646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.291922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.291954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 
00:27:53.144 [2024-12-09 16:00:48.292148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.292180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.292386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.292419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.292699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.292731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.292975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.293007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.293286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.293318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 
00:27:53.144 [2024-12-09 16:00:48.293583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.293615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.293839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.293870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.294085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.294123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.294338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.294371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.294510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.294540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 
00:27:53.144 [2024-12-09 16:00:48.294732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.294765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.295056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.295089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.295214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.295254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.295460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.295490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.295642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.295674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 
00:27:53.144 [2024-12-09 16:00:48.295955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.295986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.296251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.296284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.296421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.296452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.296643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.296676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 00:27:53.144 [2024-12-09 16:00:48.296946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.144 [2024-12-09 16:00:48.296976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.144 qpair failed and we were unable to recover it. 
00:27:53.467 [... the same three-record sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously from [2024-12-09 16:00:48.297283] through [2024-12-09 16:00:48.324634] (wall clock 00:27:53.144 through 00:27:53.467) ...]
00:27:53.467 [2024-12-09 16:00:48.324823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.324854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.325062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.325093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.325306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.325340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.325499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.325530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.325810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.325843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.326049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.326082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.326340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.326374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.326487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.326519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.326681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.326713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.326914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.326957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.327243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.327278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.327447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.327478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.327689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.327720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.328037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.328070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.328357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.328391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.328587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.328619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.328769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.328801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.329021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.329053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.329324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.329358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.329570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.329602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.329733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.329766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.329970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.330002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.330265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.330299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.330509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.330541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.330691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.330728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.331055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.331087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.331322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.331361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.331587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.331622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.331925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.331958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.332175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.332207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.332447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.332479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.332689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.332724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.332987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.333021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.333243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.333278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.333424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.333457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.333611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.333642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.333963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.334242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.334278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.334492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.334525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 00:27:53.467 [2024-12-09 16:00:48.334678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.334711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.467 qpair failed and we were unable to recover it. 
00:27:53.467 [2024-12-09 16:00:48.334942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.467 [2024-12-09 16:00:48.334976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.335117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.335148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.335313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.335346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.335651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.335684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.336067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.336101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.336302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.336336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.336536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.336570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.336827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.336861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.337122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.337155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.337423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.337457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.337683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.337716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.338003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.338036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.338348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.338384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.338578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.338617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.338805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.338837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.339092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.339127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.339265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.339298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.339443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.339476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.339681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.339712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.339928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.339961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.340215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.340262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.340470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.340504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.340697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.340728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.340879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.340911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.341171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.341215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.341374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.341406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.341635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.341667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.341848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.341880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.342164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.342198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.342441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.342475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.342614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.342648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.342800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.342834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.343029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.343061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.343204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.343248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.343519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.343552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [2024-12-09 16:00:48.343712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.343746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.344000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.344033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.344309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.344346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.344630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.344662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 00:27:53.468 [2024-12-09 16:00:48.344819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.468 [2024-12-09 16:00:48.344851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.468 qpair failed and we were unable to recover it. 
00:27:53.468 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock errors for tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 repeat from 16:00:48.345157 through 16:00:48.372381, each followed by "qpair failed and we were unable to recover it." ...]
00:27:53.471 [2024-12-09 16:00:48.372528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.372560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.372801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.372834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.373033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.373065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.373331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.373364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.373515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.373549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.373667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.373699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.373979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.374012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.374246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.374280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.374473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.374507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.374764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.374798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.375084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.375117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.375401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.375435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.375568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.375602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.375790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.375822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.376075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.376108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.376362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.376397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.376525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.376558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.376709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.376743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.377038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.377072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.377269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.377303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.377508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.377540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.377749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.377781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.378084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.378116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.378338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.378372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.378575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.378608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.378800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.378833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.378976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.379009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.379192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.379234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.379421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.379453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.379603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.379635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.379917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.379948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.380231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.380265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.380457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.380489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.380810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.380843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.380962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.381000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.381283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.381317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.381525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.381555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.381762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.381794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.382019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.382210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.382272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.382459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.382493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.382771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.382804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.383092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.383123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.383379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.383413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.383623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.383655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.383806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.383839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.384092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.384124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.384248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.384282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.384484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.384517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.384662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.384696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 
00:27:53.471 [2024-12-09 16:00:48.384972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.471 [2024-12-09 16:00:48.385005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.471 qpair failed and we were unable to recover it. 00:27:53.471 [2024-12-09 16:00:48.385288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.385321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.385477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.385509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.385692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.385725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.385945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.385977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.472 [2024-12-09 16:00:48.386254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.386288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.386524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.386558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.386741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.386775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.387029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.387062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.387247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.387282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.472 [2024-12-09 16:00:48.387471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.387503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.387701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.387734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.388016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.388049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.388197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.388259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.388537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.388570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.472 [2024-12-09 16:00:48.388793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.388826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.389079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.389113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.389330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.389363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.389524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.389561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.389785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.389820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.472 [2024-12-09 16:00:48.390119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.390152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.390356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.390390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.390640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.390770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.390803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.390927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.390966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.472 [2024-12-09 16:00:48.391245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.391280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.391417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.391448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.391584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.391618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.391816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.391849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 00:27:53.472 [2024-12-09 16:00:48.392123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.472 [2024-12-09 16:00:48.392156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.472 qpair failed and we were unable to recover it. 
00:27:53.474 [2024-12-09 16:00:48.417080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.417113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.417336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf00460 is same with the state(6) to be set
00:27:53.474 [2024-12-09 16:00:48.417595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.417671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.417905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.417944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.418175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.418208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.418358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.418391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.418601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.418636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.418918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.418950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.419145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.419178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.419338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.419371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.419624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.419657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.419913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.419946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.420133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.420166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.420377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.420411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.420594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.420627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.420890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.420928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.421146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.421181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.421462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.421499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.421716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.474 [2024-12-09 16:00:48.421749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.474 qpair failed and we were unable to recover it.
00:27:53.474 [2024-12-09 16:00:48.421993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.422025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.422231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.422265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.422487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.422521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.422732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.422765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.422991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.423023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.423204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.423266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.423533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.423566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.423734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.423766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.424089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.424122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.424257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.424298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.424553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.424586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.424734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.424767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.424969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.425002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.425134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.425165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.425359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.425390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.425544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.425578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.425710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.425741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.426004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.426036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.426160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.426193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.426361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.426395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.426550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.426584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.426709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.426741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.427017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.427050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.427380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.427416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.427614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.427648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.427863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.427899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.428056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.428090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.428295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.428531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.428565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.428756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.428789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.428914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.428946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.429130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.429163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.429296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.429331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.429487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.429520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.429723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.429757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.429963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.429997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.430232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.430267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.430465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.430499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.430689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.430926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.430959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.431158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.431191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.431339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.431372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.431505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.431538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.431693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.431727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.431981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.432014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.432292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.432328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.432476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.432509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.432665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.432698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.433004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.433036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.433182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.433230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.433423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.433455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.433614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.433648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.475 qpair failed and we were unable to recover it.
00:27:53.475 [2024-12-09 16:00:48.433778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.475 [2024-12-09 16:00:48.433812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.433941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.433975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.434254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.434289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.434531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.434565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.434761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.434794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.435048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.435081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.435329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.435364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.435569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.435602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.435752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.435785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.436094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.436126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.436399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.436434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.436650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.436683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.436963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.436996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.437189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.437233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.437375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.437408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.437543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.437576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.437722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.437757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.438009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.438042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.438294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.438330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.438476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.438509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.438703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.438737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.439034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.439068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.439386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.439420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.439675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.439708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.439918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.476 [2024-12-09 16:00:48.439951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.476 qpair failed and we were unable to recover it.
00:27:53.476 [2024-12-09 16:00:48.440090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.440124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.440311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.440347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.440549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.440583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.440850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.440883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.441081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.441113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-12-09 16:00:48.441328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.441363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.441544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.441744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.441777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.442057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.442091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.442282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.442317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-12-09 16:00:48.442458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.442492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.442745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.442778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.443085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.443126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.443335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.443370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.443576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.443609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-12-09 16:00:48.443819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.443852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.444053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.444086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.444227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.444261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.444467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.444501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.444631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.444664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-12-09 16:00:48.444921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.444955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.445096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.445130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.445246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.445280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.445480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.445513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.445706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.445739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 
00:27:53.476 [2024-12-09 16:00:48.445976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.446010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.446229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.446265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.476 [2024-12-09 16:00:48.446453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.476 [2024-12-09 16:00:48.446488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.476 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.446639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.446673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.446880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.446912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.447129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.447162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.447318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.447352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.447502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.447535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.447686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.447718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.447944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.447977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.448270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.448305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.448456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.448492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.448610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.448644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.448834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.449120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.449196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.449453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.449492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.449707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.449740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.449941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.449974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.450108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.450141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.450270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.450306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.450513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.450546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.450680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.450714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.450862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.450895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.451038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.451224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.451407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.451559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.451742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.451908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.451941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.452081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.452255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.452425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.452599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.452773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.452959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.452993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.453204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.453246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.453382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.453416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.453604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.453746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.453780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.453965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.453999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.454140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.454171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.454404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.454440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.454571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.454604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.454743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.454777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.454966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.454999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.455149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.455181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.455396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.455430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.455562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.455595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.455723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.455756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 00:27:53.477 [2024-12-09 16:00:48.455950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.477 [2024-12-09 16:00:48.455983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.477 qpair failed and we were unable to recover it. 
00:27:53.477 [2024-12-09 16:00:48.456097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.456129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.456344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.456379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.456533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.456566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.456767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.456800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.456991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.457069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.457235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.457274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.457400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.457433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.457577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.457614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.457748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.457783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.457970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.458195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.458376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.458550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.458708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.458957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.458992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.459193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.459237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.459364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.459397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.459596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.459630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.459846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.459880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.460011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.460044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.460244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.460280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.460494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.460527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.460649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.460683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.460826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.460861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.461055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.461207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.461373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.461532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.461755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.461923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.461957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.462096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.462131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.462316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.462357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.462550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.462585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.462728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.462762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.462945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.462979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.463110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.463145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.463291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.463327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.463633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.463670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.463810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.463844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.463972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.464263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.464485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.464639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.464786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.464954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.464986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.465180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.465214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.465477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.465511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.465728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.465761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.465875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.465907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.466029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.466061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 
00:27:53.478 [2024-12-09 16:00:48.466190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.466232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.466430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.466462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.466659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.478 [2024-12-09 16:00:48.466692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.478 qpair failed and we were unable to recover it. 00:27:53.478 [2024-12-09 16:00:48.466808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.466840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.466982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.467015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.467138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.467171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.467363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.467397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.467545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.467578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.467714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.467753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.468006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.468039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.468158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.468191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.468498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.468531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.468712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.468746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.468864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.468897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.469033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.469185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.469356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.469516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.469730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.469964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.469996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.470121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.470154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.470283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.470315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.470489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.470563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.470719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.470756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.470896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.470929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.471044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.471077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.471211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.471273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.471413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.471444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.471576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.471607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.471785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.471816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.471998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.472170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.472327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.472505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.472731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.472875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.472914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.473035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.473067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.473181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.473214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.473425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.473456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.473569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.473600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.473791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.473823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.474012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.474165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.474329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.474475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.474702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.474923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.474954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.475079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.475111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.475303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.475337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.475481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.475513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.475701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.475732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.475942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.475973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.476242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.476275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.476425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.476459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.479 [2024-12-09 16:00:48.476718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.476750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 
00:27:53.479 [2024-12-09 16:00:48.476949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.479 [2024-12-09 16:00:48.476982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.479 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.477109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.477142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.477370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.477403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.477548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.477580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.477784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.477816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 
00:27:53.480 [2024-12-09 16:00:48.477942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.477979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.478195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.478236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.478488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.478531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.478730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.478765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 00:27:53.480 [2024-12-09 16:00:48.478983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.480 [2024-12-09 16:00:48.479018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.480 qpair failed and we were unable to recover it. 
00:27:53.480 [2024-12-09 16:00:48.479227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.479262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.479386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.479418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.479621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.479653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.479923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.479955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.480204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.480244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.480443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.480475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.480744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.480776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.481057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.481088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.481381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.481414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.481609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.481640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.481793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.481835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.481987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.482018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.482204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.482250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.482511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.482542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.482733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.482764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.482944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.482975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.483173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.483204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.483424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.483457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.483612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.483644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.483778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.483809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.484000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.484031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.484325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.484358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.484539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.484571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.484723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.484754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.485010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.485042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.485238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.485272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.485406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.485438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.485655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.485687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.485826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.485857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.486955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.480 [2024-12-09 16:00:48.486986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.480 qpair failed and we were unable to recover it.
00:27:53.480 [2024-12-09 16:00:48.487134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.487166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.487357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.487390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.487628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.487794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.487825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.488106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.488137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.488330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.488362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.488549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.488580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.488831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.488863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.488990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.489022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.489226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.489260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.489407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.489439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.489586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.489618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.489860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.489891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.490174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.490207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.490522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.490556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.490851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.490884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.491142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.491174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.491480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.491515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.491709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.491740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.492006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.492038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.492238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.492271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.492483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.492514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.492646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.492678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.492926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.492959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.493255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.493288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.493585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.493616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.493860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.493891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.494161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.494193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.494461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.494494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.494776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.494809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.495021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.495052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.495373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.495406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.495608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.495640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.495905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.495937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.496201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.496242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.496491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.496522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.496818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.496851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.497123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.497154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.497388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.497421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.497613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.497645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.497946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.497978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.498242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.498275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.498467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.498505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.498757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.498790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.499067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.499099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.499316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.499350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.499549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.499580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.499778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.499810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.500080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.500112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.500370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.500403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.500599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.500631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.500915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.481 [2024-12-09 16:00:48.500947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.481 qpair failed and we were unable to recover it.
00:27:53.481 [2024-12-09 16:00:48.501197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.501238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.501542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.501574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.501723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.501755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.502019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.502052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.502166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.502198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.502416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.502451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.502666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.502703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.502924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.482 [2024-12-09 16:00:48.502958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.482 qpair failed and we were unable to recover it.
00:27:53.482 [2024-12-09 16:00:48.503158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.503190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.503418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.503451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.503640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.503673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.503926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.503957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.504170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.504201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.504478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.504510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.504785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.504816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.505030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.505061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.505359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.505393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.505664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.505696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.505989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.506021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.506303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.506336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.506616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.506647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.506929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.506961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.507247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.507281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.507561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.507593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.507870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.507901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.508171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.508204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.508406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.508437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.508709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.508740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.508883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.508915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.509133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.509164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.509357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.509395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.509673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.509705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.509905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.509936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.510127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.510158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.510399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.510433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.510692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.510724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.511013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.511044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.511234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.511270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.511577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.511609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.511813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.511846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.512071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.512104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.512421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.512453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.512708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.512740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.513040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.513072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.513279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.513313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.513592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.513625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.513906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.513939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.514199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.514239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 
00:27:53.482 [2024-12-09 16:00:48.514534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.514567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.514872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.514904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.515164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.515197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.515355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.482 [2024-12-09 16:00:48.515388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.482 qpair failed and we were unable to recover it. 00:27:53.482 [2024-12-09 16:00:48.515691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.515723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.515903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.515935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.516184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.516232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.516373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.516405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.516679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.516711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.516920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.516952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.517186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.517225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.517454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.517486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.517690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.517722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.518024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.518056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.518265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.518300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.518492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.518524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.518652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.518684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.518963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.518995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.519199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.519239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.519423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.519454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.519740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.519773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.519977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.520010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.520191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.520236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.520465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.520496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.520705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.520735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.521031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.521063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.521248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.521281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.521551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.521582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.521795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.521827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.522008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.522038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.522311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.522343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.522545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.522576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.522838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.522870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.523052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.523084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.523274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.523307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.523583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.523615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.523814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.523846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.524051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.524082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.524338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.524371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.524632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.524664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.524960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.524993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.525216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.525258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.525464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.525496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.525773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.525805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 00:27:53.483 [2024-12-09 16:00:48.525984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.483 [2024-12-09 16:00:48.526015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.483 qpair failed and we were unable to recover it. 
00:27:53.483 [2024-12-09 16:00:48.526293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.483 [2024-12-09 16:00:48.526326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.483 qpair failed and we were unable to recover it.
[log trimmed: the same connect()/qpair error triplet (errno = 111, tqpair=0x7fded8000b90, addr=10.0.0.2, port=4420) repeats continuously from 16:00:48.526 through 16:00:48.558; duplicate entries removed]
00:27:53.486 [2024-12-09 16:00:48.558426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.558461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.558672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.558703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.558884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.558916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.559130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.559163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.559356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.559390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.559538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.559570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.559776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.559809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.559990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.560022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.560282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.560316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.560525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.560558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.560828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.560860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.561087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.561118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.561364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.561398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.561702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.562157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.562188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.562413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.562447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.562720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.562752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.562878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.562910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.563172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.563203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.563489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.563523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.563805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.563837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.564033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.564071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.564203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.564247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.564444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.564476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.564697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.564729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.564980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.565011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.565279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.565314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.565497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.565531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.565711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.565743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.566028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.566060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.566268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.566303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.566580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.566612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.566892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.566924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.567116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.567148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.567331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.567364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.567578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.567611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.567812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.567845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.568120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.568152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.568345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.568379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.568631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.568664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.568846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.568879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.569076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.569107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.569382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.569417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.569605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.569637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 
00:27:53.486 [2024-12-09 16:00:48.569837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.569869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.570143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.570176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.570417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.486 [2024-12-09 16:00:48.570451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.486 qpair failed and we were unable to recover it. 00:27:53.486 [2024-12-09 16:00:48.570646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.570679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.570960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.570993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.571259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.571293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.571563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.571597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.571870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.571902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.572107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.572139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.572340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.572374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.572651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.572682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.572830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.572863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.573044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.573076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.573255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.573290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.573562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.573593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.573734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.573767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.573946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.573977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.574254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.574294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.574499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.574530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.574713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.574746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.574960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.574992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.575173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.575206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.575516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.575784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.575816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.576010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.576043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.576291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.576326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.576522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.576554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.576762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.576794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.576902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.576934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 00:27:53.487 [2024-12-09 16:00:48.577138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.487 [2024-12-09 16:00:48.577170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.487 qpair failed and we were unable to recover it. 
00:27:53.487 [2024-12-09 16:00:48.577394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.487 [2024-12-09 16:00:48.577426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.487 qpair failed and we were unable to recover it.
00:27:53.487 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triple repeated for tqpair=0x7fded8000b90, addr=10.0.0.2, port=4420, from 16:00:48.577706 through 16:00:48.609363 ...]
00:27:53.489 [2024-12-09 16:00:48.609615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.609647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.609923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.609955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.610266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.610301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.610503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.610536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.610721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.610754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 
00:27:53.489 [2024-12-09 16:00:48.610936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.610969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.611249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.611282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.611551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.611584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.611802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.611834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 00:27:53.489 [2024-12-09 16:00:48.612103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.489 [2024-12-09 16:00:48.612135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.489 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.612329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.612363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.612558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.612590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.612771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.612803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.613042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.613075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.613260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.613294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.613577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.613609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.613875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.613908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.614133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.614165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.614400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.614435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.614693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.614726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.615012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.615290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.615324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.615605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.615636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.615844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.615877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.616182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.616390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.616424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.616699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.616732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.617007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.617039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.617255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.617287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.617562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.617594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.618130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.618162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.618518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.618746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.618779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.619033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.619066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.619215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.619260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.619563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.619596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.619728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.619760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.619945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.619976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.620175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.620207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.620449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.620481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.620784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.620817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.620999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.621032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.621309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.621528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.621559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.621784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.621817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.622005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.622038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.622303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.622337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.622536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.622569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.622751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.622784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.622985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.623016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.623215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.623258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.623457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.623489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.623742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.623774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.623958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.623990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 
00:27:53.490 [2024-12-09 16:00:48.624192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.490 [2024-12-09 16:00:48.624233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.490 qpair failed and we were unable to recover it. 00:27:53.490 [2024-12-09 16:00:48.624365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.624397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.624590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.624622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.624898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.624930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.625209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.625254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-12-09 16:00:48.625438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.625470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.625653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.625685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.625910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.625942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.626171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.626203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.626487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.626519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-12-09 16:00:48.626773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.626804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.627102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.627133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.627358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.627391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.627598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.627629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.627911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.627948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-12-09 16:00:48.628176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.628209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.628491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.628523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.628730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.628763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.628974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.629006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.629256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.629291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.491 [2024-12-09 16:00:48.629587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.629620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.629827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.629860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.630143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.630176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.630431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.630465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 00:27:53.491 [2024-12-09 16:00:48.630712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.491 [2024-12-09 16:00:48.630744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.491 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-12-09 16:00:48.661142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.661178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.661456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.661490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.661683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.661717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.662001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.662034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.662313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.662349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-12-09 16:00:48.662478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.662510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.662705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.662737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.663018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.663049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.663350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.663385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.663590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.663624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-12-09 16:00:48.663849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.663881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.664161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.664193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.664481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.664515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.664734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.664767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.664986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.665018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 
00:27:53.493 [2024-12-09 16:00:48.665271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.665305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.665614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.665647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.665863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.493 [2024-12-09 16:00:48.665896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.493 qpair failed and we were unable to recover it. 00:27:53.493 [2024-12-09 16:00:48.666151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.666182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.666388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.666422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-12-09 16:00:48.666560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.666592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.666869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.666900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.667105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.667137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.667391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.667426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.667629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.667662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-12-09 16:00:48.667881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.667912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.668183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.668228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.668494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.668527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.668744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.668776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.669035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.669074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-12-09 16:00:48.669293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.669327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.669557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.669589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.669783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.669816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.670089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.670120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.494 [2024-12-09 16:00:48.670326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.670360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 
00:27:53.494 [2024-12-09 16:00:48.670614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.494 [2024-12-09 16:00:48.670646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.494 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.670779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.670812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.671086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.671118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.671385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.671419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.671674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.671706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-12-09 16:00:48.672017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.672049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.672239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.672273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.672551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.672584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.672725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.672758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.672958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.672991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-12-09 16:00:48.673252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.673286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.673499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.673533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.673734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.673765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.673878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.673911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.674187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.674229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-12-09 16:00:48.674363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.674394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.674525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.674558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.674821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.674853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.675064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.675096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.675355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.675391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 
00:27:53.813 [2024-12-09 16:00:48.675617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.675650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.675935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.675969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.676248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.676283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.813 qpair failed and we were unable to recover it. 00:27:53.813 [2024-12-09 16:00:48.676429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.813 [2024-12-09 16:00:48.676463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.676717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.676750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 
00:27:53.814 [2024-12-09 16:00:48.677031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.677064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.677343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.677377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.677687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.677721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.677995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.678028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.678204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 
00:27:53.814 [2024-12-09 16:00:48.678450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.678487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.678770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.678807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.679081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.679114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.679371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.679409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.679690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.679732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 
00:27:53.814 [2024-12-09 16:00:48.680028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.680067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.680369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.680407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.680687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.680718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.681004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.681037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 00:27:53.814 [2024-12-09 16:00:48.681301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.814 [2024-12-09 16:00:48.681338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.814 qpair failed and we were unable to recover it. 
00:27:53.814 [2024-12-09 16:00:48.681542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.814 [2024-12-09 16:00:48.681575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.814 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, timestamps aside, roughly 114 more times from 16:00:48.681 through 16:00:48.713; errno 111 is ECONNREFUSED ...]
00:27:53.817 [2024-12-09 16:00:48.713370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.713402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.713630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.713662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.713842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.713874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.714149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.714181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.714375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.714408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 
00:27:53.817 [2024-12-09 16:00:48.714676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.714707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.714914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.714945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.715203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.715261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.715489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.715520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.715720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.715753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 
00:27:53.817 [2024-12-09 16:00:48.716031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.716063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.716362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.716395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.716577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.716609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.716822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.716854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.717048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.717080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 
00:27:53.817 [2024-12-09 16:00:48.717358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.717391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.717627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.717660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.717919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.717952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.817 [2024-12-09 16:00:48.718225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.817 [2024-12-09 16:00:48.718259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.817 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.718545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.718576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.718850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.718882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.719069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.719100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.719240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.719273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.719484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.719515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.719721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.719753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.720017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.720049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.720327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.720361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.720649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.720681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.720964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.720995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.721254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.721288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.721429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.721460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.721664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.721696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.721949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.721980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.722239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.722274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.722574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.722606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.722797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.722829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.723085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.723117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.723300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.723334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.723535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.723566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.723844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.723883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.724176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.724207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.724501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.724533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.724815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.724850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.725043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.725075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.725339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.725373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.725574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.725606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.725801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.725833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.726106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.726137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.726319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.726354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.726501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.726534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.726730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.726762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.726978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.727010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.727313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.727346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.727608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.727643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.727940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.727975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 
00:27:53.818 [2024-12-09 16:00:48.728279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.728314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.728527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.818 [2024-12-09 16:00:48.728561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.818 qpair failed and we were unable to recover it. 00:27:53.818 [2024-12-09 16:00:48.728702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.728734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.728914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.728946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.729202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.729249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-12-09 16:00:48.729507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.729541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.729822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.729855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.730135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.730167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.730453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.730488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.730766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.730798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-12-09 16:00:48.731059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.731092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.731323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.731359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.731636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.731672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.731928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.731961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.732185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.732230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-12-09 16:00:48.732486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.732523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.732820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.732852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.733141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.733185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.733415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.733456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 00:27:53.819 [2024-12-09 16:00:48.733764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.819 [2024-12-09 16:00:48.733801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.819 qpair failed and we were unable to recover it. 
00:27:53.819 [2024-12-09 16:00:48.734049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.819 [2024-12-09 16:00:48.734086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:53.819 qpair failed and we were unable to recover it.
00:27:53.822 [2024-12-09 16:00:48.766711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.766743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.767012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.767043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.767343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.767376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.767648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.767680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.767869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.767900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-12-09 16:00:48.768033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.768065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.768256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.768289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.768561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.768592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.768886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.768919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.769193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.769239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-12-09 16:00:48.769366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.769399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.769580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.769612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.769904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.769937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.770227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.770261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.770442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.770473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 
00:27:53.822 [2024-12-09 16:00:48.770730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.770762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.771055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.771087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.771362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.822 [2024-12-09 16:00:48.771395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.822 qpair failed and we were unable to recover it. 00:27:53.822 [2024-12-09 16:00:48.771612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.771644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.771892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.771923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.772122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.772154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.772406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.772440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.772645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.772676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.772940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.772971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.773274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.773307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.773573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.773604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.773887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.773920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.774208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.774249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.774460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.774491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.774764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.774796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.775059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.775090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.775386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.775419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.775715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.775747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.776017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.776047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.776357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.776391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.776672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.776703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.776890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.776927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.777133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.777165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.777395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.777428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.777610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.777641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.777914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.777945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.778254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.778288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.778535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.778567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.778869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.778902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.779202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.779272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.779422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.779455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.779646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.779678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.779955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.779988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.780171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.780203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.780516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.780548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.780830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.780862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.781141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.781172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.781464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.781497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.781704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.781735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.782004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.782036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 
00:27:53.823 [2024-12-09 16:00:48.782290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.782325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.782578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.823 [2024-12-09 16:00:48.782609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.823 qpair failed and we were unable to recover it. 00:27:53.823 [2024-12-09 16:00:48.782859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.782890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.783191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.783230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.783420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.783451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 
00:27:53.824 [2024-12-09 16:00:48.783644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.783675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.783931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.783961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.784262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.784295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.784546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.784578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.784892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.784924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 
00:27:53.824 [2024-12-09 16:00:48.785211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.785253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.785525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.785557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.785847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.785879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.786063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.786094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.786366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.786399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 
00:27:53.824 [2024-12-09 16:00:48.786683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.786715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.787002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.787033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.787249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.787282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.787576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.787608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.787831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.787862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 
00:27:53.824 [2024-12-09 16:00:48.788059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.788089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.788349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.788388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.788582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.788614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.788883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.788915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 00:27:53.824 [2024-12-09 16:00:48.789141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.824 [2024-12-09 16:00:48.789172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.824 qpair failed and we were unable to recover it. 
00:27:53.827 [2024-12-09 16:00:48.819367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.819400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.819603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.819634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.819771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.819802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.820002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.820033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.820311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.820348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 
00:27:53.827 [2024-12-09 16:00:48.820550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.820583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.820886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.820924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.821234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.821269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.821549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.821581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.821857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.821890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 
00:27:53.827 [2024-12-09 16:00:48.822104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.822135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.822436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.822470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.822724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.822755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.823060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.823092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.823377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.823410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 
00:27:53.827 [2024-12-09 16:00:48.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.823723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.823979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.824012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.824210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.824252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.827 [2024-12-09 16:00:48.824542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.827 [2024-12-09 16:00:48.824575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.827 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.824742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.824775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.825033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.825066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.825268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.825302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.825498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.825530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.825810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.825842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.826097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.826128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.826352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.826386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.826642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.826673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.826856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.826887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.827169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.827201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.827436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.827469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.827676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.827707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.827890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.827922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.828228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.828261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.828547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.828579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.828874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.828906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.829182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.829213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.829499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.829530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.829732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.829764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.829895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.829927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.830120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.830152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.830287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.830321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.830459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.830492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.830748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.830780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.831058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.831090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.831314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.831347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.831539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.831570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.831771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.831808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.832062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.832093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.832241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.832275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.832460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.832492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.832718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.832750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.832953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.832985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.833261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.833297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.833483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.833514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.833735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.833767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 
00:27:53.828 [2024-12-09 16:00:48.834020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.834057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.834318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.834352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.834534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.828 [2024-12-09 16:00:48.834565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.828 qpair failed and we were unable to recover it. 00:27:53.828 [2024-12-09 16:00:48.834805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.834840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.835095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.835128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 
00:27:53.829 [2024-12-09 16:00:48.835411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.835446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.835594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.835627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.835881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.835913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.836225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.836259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.836471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.836505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 
00:27:53.829 [2024-12-09 16:00:48.836768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.836801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.837102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.837133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.837357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.837390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.837606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.837638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.837850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.837882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 
00:27:53.829 [2024-12-09 16:00:48.838157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.838188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.838486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.838519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.838732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.838763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.838968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.838999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.839256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.839289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 
00:27:53.829 [2024-12-09 16:00:48.839472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.839504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.839710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.839742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.839960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.839992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.840272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.840306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 00:27:53.829 [2024-12-09 16:00:48.840507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.829 [2024-12-09 16:00:48.840538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.829 qpair failed and we were unable to recover it. 
00:27:53.832 [2024-12-09 16:00:48.870751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.870783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.871069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.871101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.871356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.871389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.871598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.871630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.871903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.871936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 
00:27:53.832 [2024-12-09 16:00:48.872139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.872171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.872384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.872415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.872618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.872650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.872854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.872885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.872995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.873026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 
00:27:53.832 [2024-12-09 16:00:48.873300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.873334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.873471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.873503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.873656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.873688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.873883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.873915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.874138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.874171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 
00:27:53.832 [2024-12-09 16:00:48.874370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.874402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.874725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.874764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.875029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.875061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.875363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.832 [2024-12-09 16:00:48.875397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.832 qpair failed and we were unable to recover it. 00:27:53.832 [2024-12-09 16:00:48.875560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.875591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.875729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.875760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.876035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.876067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.876200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.876244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.876547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.876579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.876732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.876764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.876990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.877022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.877212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.877259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.877475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.877507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.877703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.877735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.878022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.878054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.878344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.878378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.878679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.878712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.878942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.878974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.879275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.879310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.879575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.879608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.879910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.879941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.880184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.880489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.880566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.880911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.880946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.881240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.881275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.881525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.881566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.881850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.881882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.882157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.882190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.882485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.882529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.882790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.882822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.883004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.883035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.883327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.883362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.883627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.883659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.883885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.883917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.884227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.884260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.884458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.884744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.884775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.885077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.885111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.885320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.885354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.885573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.885605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 
00:27:53.833 [2024-12-09 16:00:48.885788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.885821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.886074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.886106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.886300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.886335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.833 [2024-12-09 16:00:48.886475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.833 [2024-12-09 16:00:48.886506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.833 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.886686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.886718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 
00:27:53.834 [2024-12-09 16:00:48.886989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.887021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.887233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.887266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.887569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.887602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.887815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.887847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.888147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.888179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 
00:27:53.834 [2024-12-09 16:00:48.888392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.888426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.888703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.888735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.888856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.888887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.889139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.889171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.889385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.889419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 
00:27:53.834 [2024-12-09 16:00:48.889718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.889759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.889898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.889931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.890126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.890160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.890423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.890460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.890670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.890704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 
00:27:53.834 [2024-12-09 16:00:48.890908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.890940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.891130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.891163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.891403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.891437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.891640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.891673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 00:27:53.834 [2024-12-09 16:00:48.891942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.834 [2024-12-09 16:00:48.891974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.834 qpair failed and we were unable to recover it. 
[... the same connect() failure (errno = 111) against tqpair=0xef2500 at 10.0.0.2:4420 repeats with fresh timestamps through 16:00:48.916782 ...]
00:27:53.837 [2024-12-09 16:00:48.916921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.916953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.917129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.917161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.917345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.917379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.917646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.917678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.917932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.917965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 
00:27:53.837 [2024-12-09 16:00:48.918227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.918260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.918409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.918591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.918623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.918827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.918860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.919088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.919119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 
00:27:53.837 [2024-12-09 16:00:48.919270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.919306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.919505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.919536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.919727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.919760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.919955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.919985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.920164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.920197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 
00:27:53.837 [2024-12-09 16:00:48.920383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.920416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.920663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.920695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.920895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.921150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.921184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.921472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.921505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 
00:27:53.837 [2024-12-09 16:00:48.921665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.921698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.837 qpair failed and we were unable to recover it. 00:27:53.837 [2024-12-09 16:00:48.921884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.837 [2024-12-09 16:00:48.921915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.922051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.922084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.922277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.922310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.922430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.922461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.922659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.922690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.922805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.922836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.922970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.923002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.923200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.923264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.923451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.923483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.923667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.923700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.923883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.923916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.924042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.924073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.924286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.924320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.924567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.924600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.924784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.924816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.925009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.925041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.925276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.925309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.925455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.925486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.925661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.925693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.925813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.925845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.926025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.926057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.926186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.926225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.926414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.926446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.926720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.926752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.926946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.926978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.927250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.927284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.927424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.927455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.927591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.927623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.927755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.927786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.927901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.927933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.928186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.928230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.928378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.928413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.928671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.928704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.928914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.928946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.929080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.929112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.929250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.929283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.929551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.929590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.929717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.929749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.929993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.930026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 
00:27:53.838 [2024-12-09 16:00:48.930205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.838 [2024-12-09 16:00:48.930246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.838 qpair failed and we were unable to recover it. 00:27:53.838 [2024-12-09 16:00:48.930444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.930476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.930728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.930760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.930982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.931015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.931293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.931327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.931593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.931625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.931889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.931921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.932121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.932153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.932340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.932373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.932481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.932514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.932702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.932733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.932858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.932890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.933137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.933170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.933369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.933402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.933576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.933608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.933746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.933778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.933897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.933929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.934182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.934214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.934478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.934510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.934692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.934724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.934863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.934895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.935070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.935103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.935293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.935328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.935508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.935540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.935666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.935699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.935882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.935914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.936205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.936246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.936424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.936456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.936594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.936625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.936872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.936904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.937174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.937206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.937356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.937388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.937660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.937691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.937827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.937859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.938113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.938145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.938323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.938356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.938464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.938496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.938768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.938799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.939042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.939074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.839 [2024-12-09 16:00:48.939268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.939303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 
00:27:53.839 [2024-12-09 16:00:48.939490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.839 [2024-12-09 16:00:48.939521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.839 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.939717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.939750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.940018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.940049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.940294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.940327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.940524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.940555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.940872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.940904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.941080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.941112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.941296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.941330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.941598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.941630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.941844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.941876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.942070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.942102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.942292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.942325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.942603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.942636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.942832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.942865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.943049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.943079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.943340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.943376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.943568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.943600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.943810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.943841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.944016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.944048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.944233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.944266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.944460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.944492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.944615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.944648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.944838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.944871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.945141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.945369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.945402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.945543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.945581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.945775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.945807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.946014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.946045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.946162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.946194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.946346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.946378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.946563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.946595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.946811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.946843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.947055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.947087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.947331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.947365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 00:27:53.840 [2024-12-09 16:00:48.947583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.947614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.840 qpair failed and we were unable to recover it. 
00:27:53.840 [2024-12-09 16:00:48.947800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.840 [2024-12-09 16:00:48.947833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.948020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.948051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.948243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.948277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.948413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.948446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.948638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.948670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.948931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.948962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.949172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.949204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.949406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.949438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.949647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.949679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.949867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.949899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.950095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.950128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.950335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.950369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.950561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.950592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.950727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.950759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.950876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.950908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.951117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.951149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.951283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.951318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.951452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.951482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.951674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.951706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.951896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.951927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.952052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.952084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.952214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.952255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.952452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.952484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.952660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.952690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.952860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.952891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.953069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.953102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.953346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.953379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.953575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.953606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.953788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.953821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.954032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.954063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.954249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.954283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.954538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.954575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.954769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.954800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.954992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.955024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.955148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.955180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.841 [2024-12-09 16:00:48.955373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.955406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.955588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.955620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.955814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.955845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.956121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.956154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 00:27:53.841 [2024-12-09 16:00:48.956350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.841 [2024-12-09 16:00:48.956382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:53.841 qpair failed and we were unable to recover it. 
00:27:53.844 [2024-12-09 16:00:48.978509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.978581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.978794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.978829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.979020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.979053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.979275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.979312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.979486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.979518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 
00:27:53.844 [2024-12-09 16:00:48.979712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.979744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.979916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.979947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.980131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.980162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.980305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.980339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.980530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.980561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 
00:27:53.844 [2024-12-09 16:00:48.980741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.980772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.980961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.980993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.981161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.981192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.981322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.844 [2024-12-09 16:00:48.981363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.844 qpair failed and we were unable to recover it. 00:27:53.844 [2024-12-09 16:00:48.981572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.981604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.981735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.981766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.981949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.981980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.982167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.982198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.982418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.982449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.982638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.982669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.982880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.982911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.983108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.983140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.983338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.983370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.983499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.983530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.983634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.983665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.983932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.983963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.984140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.984320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.984352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.984542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.984573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.984769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.984801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.984922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.984952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.985155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.985188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.985409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.985440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.985611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.985642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.985853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.985885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.986124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.986155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.986325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.986357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.986479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.986509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.986632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.986663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.986862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.986892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.987073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.987142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.987306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.987345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.987468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.987500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.987641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.987671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.987887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.987918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.988103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.988135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.988267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.988299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.988497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.988528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.988665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.988697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.988834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.988865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 
00:27:53.845 [2024-12-09 16:00:48.989148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.989179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.989491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.989527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.845 [2024-12-09 16:00:48.989717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.845 [2024-12-09 16:00:48.989748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.845 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.989942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.989984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.990178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.990211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.990475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.990508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.990691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.990723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.990965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.990998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.991209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.991252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.991389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.991420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.991549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.991582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.991768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.991801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.991985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.992015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.992130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.992161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.992377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.992410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.992650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.992681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.992941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.992972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.993169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.993201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.993417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.993450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.993686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.993717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.993979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.994010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.994289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.994323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.994563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.994596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.994793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.994825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.995000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.995162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.995332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.995481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.995703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.995904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.995936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.996184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.996225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.996475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.996507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.996749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.996782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.996997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.997030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.997215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.997267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.997481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.997519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.997759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.997792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.997909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.997940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.998056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.998088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.998232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.998265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 
00:27:53.846 [2024-12-09 16:00:48.998482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.998514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.846 [2024-12-09 16:00:48.998727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.846 [2024-12-09 16:00:48.998760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.846 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:48.998934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.998967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:48.999089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.999127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:48.999251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.999283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:48.999528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.999559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:48.999768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.999800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:48.999929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:48.999960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.000229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.000263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.000453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.000484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.000725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.000757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.000933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.000964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.001207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.001246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.001457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.001489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.001607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.001642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.001812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.001843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.001966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.001998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.002189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.002228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.002354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.002386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.002624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.002655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.002851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.002883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.003088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.003119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.003312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.003346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.003533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.003564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.003807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.003839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.003959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.003991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.004191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.004231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.004437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.004468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.004643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.004674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.004861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.004892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.005165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.005197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.005340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.005372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.005562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.005594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.005773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.006070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.006101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.006231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.006265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.006448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.006478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.006664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.006694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.006869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.006900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 00:27:53.847 [2024-12-09 16:00:49.007021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.007052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.847 qpair failed and we were unable to recover it. 
00:27:53.847 [2024-12-09 16:00:49.007236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.847 [2024-12-09 16:00:49.007270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.007474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.007506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.007765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.007796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.007986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.008023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.008292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.008327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 
00:27:53.848 [2024-12-09 16:00:49.008568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.008598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.008803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.008834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.009022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.009053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.009175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.009206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.009466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.009498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 
00:27:53.848 [2024-12-09 16:00:49.009715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.009746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.010035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.010066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.010241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.010273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.010449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.010480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.010662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.010693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 
00:27:53.848 [2024-12-09 16:00:49.010790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.010821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.010985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.011017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.011125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.011157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.011400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.011433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.011563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.011596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 
00:27:53.848 [2024-12-09 16:00:49.011836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.011867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.012143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.012174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.012314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.012348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.012557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.012588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.012722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.012753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 
00:27:53.848 [2024-12-09 16:00:49.012885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.848 [2024-12-09 16:00:49.012917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:53.848 qpair failed and we were unable to recover it. 00:27:53.848 [2024-12-09 16:00:49.013106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.013137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.013374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.013407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.013622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.013653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.013836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.013867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.159 [2024-12-09 16:00:49.013987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.014021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.014150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.014184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.014326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.014360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.014552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.014584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.014819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.014852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.159 [2024-12-09 16:00:49.015042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.015073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.015264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.015298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.015467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.015499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.015685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.015717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.015830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.015862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.159 [2024-12-09 16:00:49.016097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.016129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.016326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.016361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.016488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.016525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.016709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.016743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.016873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.016907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.159 [2024-12-09 16:00:49.017088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.017120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.017292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.017326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.017501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.017533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.017661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.017693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 00:27:54.159 [2024-12-09 16:00:49.017821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.017853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.159 [2024-12-09 16:00:49.018035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.159 [2024-12-09 16:00:49.018068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.159 qpair failed and we were unable to recover it. 
00:27:54.162 [... the same three-message error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 16:00:49.018 through 16:00:49.042 with only the timestamps changing ...]
00:27:54.162 [2024-12-09 16:00:49.043034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.043066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.043241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.043273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.043454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.043487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.043751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.043783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.043911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.043943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 
00:27:54.162 [2024-12-09 16:00:49.044128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.044160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.044310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.044344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.044479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.044511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.044680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.044713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.162 [2024-12-09 16:00:49.044838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.044870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 
00:27:54.162 [2024-12-09 16:00:49.044991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.162 [2024-12-09 16:00:49.045023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.162 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.045195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.045238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.045352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.045384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.045495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.045527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.045722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.045755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.046024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.046056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.046261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.046294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.046507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.046540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.046730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.046762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.046950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.046980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.047156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.047186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.047319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.047350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.047485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.047516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.047782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.047814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.047926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.047958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.048145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.048183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.048364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.048398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.048587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.048620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.048743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.048776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.048902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.048934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.049103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.049136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.049344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.049377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.049642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.049675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.049844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.049877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.050086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.050229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.050390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.050531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.050804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.050959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.050991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 
00:27:54.163 [2024-12-09 16:00:49.051239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.051273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.051494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.051605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.051637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.163 [2024-12-09 16:00:49.051829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.163 [2024-12-09 16:00:49.051861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.163 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.052155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.052187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.052380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.052414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.052533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.052565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.052753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.052785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.052956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.052988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.053110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.053143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.053262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.053297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.053489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.053521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.053716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.053749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.053937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.053969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.054180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.054213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.054363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.054396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.054523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.054555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.054740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.054772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.054952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.054984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.055173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.055205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.055352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.055385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.055584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.055616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.055786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.055818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.056011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.056043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.056224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.056258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.056470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.056507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.056753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.056785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.057031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.057063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.057305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.057339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.057524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.057556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.057763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.057795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.058064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.058096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.058203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.058435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.058468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.058651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.058682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.164 [2024-12-09 16:00:49.058790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.058822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.059066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.059098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.059304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.059337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.059523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.059556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 00:27:54.164 [2024-12-09 16:00:49.059696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.164 [2024-12-09 16:00:49.059728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.164 qpair failed and we were unable to recover it. 
00:27:54.167 [repeated entries elided: the same posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error for tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", recurs for every retry from 16:00:49.059903 through 16:00:49.083822]
00:27:54.167 [2024-12-09 16:00:49.084020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.084053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 00:27:54.167 [2024-12-09 16:00:49.084172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.084204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 00:27:54.167 [2024-12-09 16:00:49.084322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.084355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 00:27:54.167 [2024-12-09 16:00:49.084544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.084575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 00:27:54.167 [2024-12-09 16:00:49.084758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.084790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 
00:27:54.167 [2024-12-09 16:00:49.085004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.085036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.167 qpair failed and we were unable to recover it. 00:27:54.167 [2024-12-09 16:00:49.085280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.167 [2024-12-09 16:00:49.085315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.085440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.085472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.085711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.085742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.085922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.085954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.086070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.086102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.086204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.086247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.086434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.086467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.086657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.086689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.086867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.086900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.087074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.087106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.087348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.087383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.087577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.087610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.087812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.087844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.088035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.088068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.088251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.088285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.088460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.088492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.088668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.088700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.088895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.088926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.089119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.089150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.089282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.089328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.089612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.089649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.089916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.089952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.090163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.090196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.090428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.090462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.090597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.090629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.090869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.090902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.091155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.091187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.091409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.091442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.091560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.091592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.091727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.091766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.092009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.092041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.092166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.092198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.092312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.092345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 00:27:54.168 [2024-12-09 16:00:49.092458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.168 [2024-12-09 16:00:49.092490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.168 qpair failed and we were unable to recover it. 
00:27:54.168 [2024-12-09 16:00:49.092596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.092628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.092836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.092869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.092984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.093017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.093200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.093242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.093533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.093565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.093742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.093774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.093895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.093928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.094054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.094086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.094282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.094316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.094561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.094594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.094857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.094889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.095133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.095166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.095309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.095342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.095604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.095636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.095808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.095840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.095967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.095999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.096191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.096231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.096504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.096672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.096704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.096822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.096854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.097114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.097146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.097392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.097426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.097704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.097738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.097856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.097887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.098075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.098108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.098340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.098374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.098494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.098526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.098705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.098737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.098912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.098945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.099137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.099169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.099377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.099410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.099534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.099567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.099754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.099786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.099922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.099954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.100170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.100202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 
00:27:54.169 [2024-12-09 16:00:49.100415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.100452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.100688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.100720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.100839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.100871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.101062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.169 [2024-12-09 16:00:49.101094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.169 qpair failed and we were unable to recover it. 00:27:54.169 [2024-12-09 16:00:49.101215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.101265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 
00:27:54.170 [2024-12-09 16:00:49.101369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.101402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 00:27:54.170 [2024-12-09 16:00:49.101508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.101540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 00:27:54.170 [2024-12-09 16:00:49.101779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.101811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 00:27:54.170 [2024-12-09 16:00:49.101992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.102024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 00:27:54.170 [2024-12-09 16:00:49.102216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.170 [2024-12-09 16:00:49.102258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.170 qpair failed and we were unable to recover it. 
00:27:54.170 [2024-12-09 16:00:49.102456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.102489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.102672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.102704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.102883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.102915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.103034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.103066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.103197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.103240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.103415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.103446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.103696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.103728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.103974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.104111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.104313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.104535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.104752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.104964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.104996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.105111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.105143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.105267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.105300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.105468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.105500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.105720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.105753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.105932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.105965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.106078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.106110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.106346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.106379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.106513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.106545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.106813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.106846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.106955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.106987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.107223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.107256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.107442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.107475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.107596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.107628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.107748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.107782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.107905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.107937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.108117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.108149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.108374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.108408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.108580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.108617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.108864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.108896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.109024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.109056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.170 qpair failed and we were unable to recover it.
00:27:54.170 [2024-12-09 16:00:49.109260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.170 [2024-12-09 16:00:49.109293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.109465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.109498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.109758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.109790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.109979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.110011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.110248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.110281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.110411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.110444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.110615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.110647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.110837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.110869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.111109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.111142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.111311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.111343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.111457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.111489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.111744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.111777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.111978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.112010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.112193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.112242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.112486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.112520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.112707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.112739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.112915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.112946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.113182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.113214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.113480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.113513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.113687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.113719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.113823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.113854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.114040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.114073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.114318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.114351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.114480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.114512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.114742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.114814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.115082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.115118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.115263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.115300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.115531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.115563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.115826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.115858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.115995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.116026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.116291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.116325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.116535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.116568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.116770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.116801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.116983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.117016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.117263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.117298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.117435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.117467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.117573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.117604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.117793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.117825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.118052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.118085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.171 [2024-12-09 16:00:49.118233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.171 [2024-12-09 16:00:49.118267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.171 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.118389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.118420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.118557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.118588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.118831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.118864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.119105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.119138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.119308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.119342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.119470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.119502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.119682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.119714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.119841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.119872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.120072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.120354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.120386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.120499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.120531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.120717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.120755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.121036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.121068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.121190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.121230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.121344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.121374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.121637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.121668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.121843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.121873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.122083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.122114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.122379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.122411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.122648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.122679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.122850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.122882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.123121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.172 [2024-12-09 16:00:49.123153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.172 qpair failed and we were unable to recover it.
00:27:54.172 [2024-12-09 16:00:49.123348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.123382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.123515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.123547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.123787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.123818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.123995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.124028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.124200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.124241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 
00:27:54.172 [2024-12-09 16:00:49.124431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.124463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.124650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.124683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.124921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.124954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.125225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.125258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.125362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.125394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 
00:27:54.172 [2024-12-09 16:00:49.125505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.125537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.125743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.125775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.172 [2024-12-09 16:00:49.125962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.172 [2024-12-09 16:00:49.125995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.172 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.126249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.126281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.126481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.126512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.126696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.126727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.126899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.126930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.127124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.127156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.127384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.127418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.127611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.127644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.127814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.127846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.128084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.128115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.128288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.128321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.128450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.128480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.128604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.128634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.128845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.128881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.129073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.129102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.129234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.129266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.129507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.129541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.129800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.129829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.130005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.130047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.130253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.130285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.130484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.130515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.130700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.130730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.130846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.130875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.131081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.131111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.131243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.131274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.131511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.131542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.131726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.131755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.131871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.131900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.132071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.132100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.132356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.132387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.132557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.132587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.132827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.132857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.133081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.133110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.133326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.133356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.133617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.133649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.133833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.133863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.133978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.134007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.134267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.134301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 
00:27:54.173 [2024-12-09 16:00:49.134474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.134625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.134656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.173 qpair failed and we were unable to recover it. 00:27:54.173 [2024-12-09 16:00:49.134851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.173 [2024-12-09 16:00:49.134883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.135075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.135106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.135282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.135313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.135609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.135640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.135824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.135855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.136095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.136131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.136342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.136374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.136480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.136511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.136692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.136722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.136899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.136931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.137169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.137199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.137379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.137409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.137606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.137640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.137826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.137857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.138042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.138073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.138256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.138290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.138423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.138454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.138732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.138763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.138879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.138910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.139043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.139074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.139193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.139246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.139492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.139523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.139731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.139762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.139933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.139964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.140157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.140187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.140381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.140415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.140644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.140675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.140846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.140877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.141056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.141086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.141276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.141511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.141542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.141718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.141748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 00:27:54.174 [2024-12-09 16:00:49.141927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.141958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
00:27:54.174 [2024-12-09 16:00:49.142082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.174 [2024-12-09 16:00:49.142113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.174 qpair failed and we were unable to recover it. 
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (errno = 111, tqpair=0xef2500, addr=10.0.0.2, port=4420) repeats for every subsequent connection attempt from 16:00:49.142236 through 16:00:49.166711 ...]
00:27:54.177 [2024-12-09 16:00:49.166954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.166985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 00:27:54.177 [2024-12-09 16:00:49.167247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.167279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 00:27:54.177 [2024-12-09 16:00:49.167454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.167485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 00:27:54.177 [2024-12-09 16:00:49.167662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.167693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 00:27:54.177 [2024-12-09 16:00:49.167869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.167900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 
00:27:54.177 [2024-12-09 16:00:49.168086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.177 [2024-12-09 16:00:49.168116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.177 qpair failed and we were unable to recover it. 00:27:54.177 [2024-12-09 16:00:49.168291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.168323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.168570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.168640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.168930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.168966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.169100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.169132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.169351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.169385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.169574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.169606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.169782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.169814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.170007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.170038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.170227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.170260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.170501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.170532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.170713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.170744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.170919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.170950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.171072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.171104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.171233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.171266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.171526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.171572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.171744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.171776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.171906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.171938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.172175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.172206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.172485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.172517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.172752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.172784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.172975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.173006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.173133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.173163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.173358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.173390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.173587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.173617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.173822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.173853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.174047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.174079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.174314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.174347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.174449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.174480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.174605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.174638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.174884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.174915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.175047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.175079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.175277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.175309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.175514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.175544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.175732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.175763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.175885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.175916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.176114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.176145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.176461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.176643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.176675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 00:27:54.178 [2024-12-09 16:00:49.176817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.178 [2024-12-09 16:00:49.176848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.178 qpair failed and we were unable to recover it. 
00:27:54.178 [2024-12-09 16:00:49.176967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.176998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.177246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.177278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.177549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.177581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.177865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.177896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.178067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.178098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.178288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.178322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.178541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.178572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.178845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.178876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.179141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.179172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.179443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.179476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.179657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.179687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.179860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.179891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.180156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.180187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.180436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.180507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.180716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.180751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.180956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.181138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.181170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.181370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.181404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.181541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.181572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.181696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.181727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.181933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.181968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.182243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.182277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.182454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.182486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.182659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.182690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.182857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.182887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.183057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.183088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.183228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.183261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.183503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.183535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.183744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.183775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.183961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.183994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [2024-12-09 16:00:49.184166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.184197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.184331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.184363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.184600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.184632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.184894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.184926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 00:27:54.179 [2024-12-09 16:00:49.185139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.179 [2024-12-09 16:00:49.185170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.179 qpair failed and we were unable to recover it. 
00:27:54.179 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence for tqpair=0xef2500 (addr=10.0.0.2, port=4420) repeats 110 more times, 16:00:49.185439 through 16:00:49.209742, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:54.182 [2024-12-09 16:00:49.210002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.182 [2024-12-09 16:00:49.210033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.182 qpair failed and we were unable to recover it. 00:27:54.182 [2024-12-09 16:00:49.210167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.182 [2024-12-09 16:00:49.210198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.182 qpair failed and we were unable to recover it. 00:27:54.182 [2024-12-09 16:00:49.210465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.182 [2024-12-09 16:00:49.210497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.182 qpair failed and we were unable to recover it. 00:27:54.182 [2024-12-09 16:00:49.210639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.182 [2024-12-09 16:00:49.210669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.182 qpair failed and we were unable to recover it. 00:27:54.182 [2024-12-09 16:00:49.210926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.182 [2024-12-09 16:00:49.210958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.182 qpair failed and we were unable to recover it. 
00:27:54.182 [2024-12-09 16:00:49.211138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.211170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.211345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.211377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.211484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.211516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.211777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.211808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.211928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.212228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.212261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.212393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.212424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.212659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.212690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.212950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.212981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.213260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.213294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.213404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.213436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.213564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.213596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.213768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.213798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.214043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.214075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.214211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.214249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.214425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.214462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.214637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.214669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.214856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.214887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.215070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.215101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.215279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.215312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.215552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.215583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.215784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.215815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.215919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.215950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.216083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.216115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.216306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.216340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.216636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.216850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.216881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.217003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.217154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.217425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.217638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.217808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.217947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.217978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.218101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.218133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.218312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.218344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.183 [2024-12-09 16:00:49.218513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.218545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.218725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.218756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.218969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.219000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.219186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.219225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 00:27:54.183 [2024-12-09 16:00:49.219512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.183 [2024-12-09 16:00:49.219544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.183 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.219708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.219738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.219916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.219948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.220184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.220229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.220364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.220397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.220520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.220552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.220807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.220839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.221009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.221040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.221251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.221284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.221478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.221508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.221686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.221717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.221901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.221931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.222032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.222063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.222187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.222411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.222441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.222678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.222709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.222964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.222996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.223179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.223211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.223421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.223453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.223595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.223626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.223796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.223828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.223953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.223983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.224229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.224262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.224440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.224472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.224663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.224693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.224874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.224906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.225168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.225200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.225459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.225492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.225660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.225691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.225928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.226039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.226070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.226202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.226245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.226384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.226416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.226580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.226611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.226818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.226850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.227045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.227077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 
00:27:54.184 [2024-12-09 16:00:49.227246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.227278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.227457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.227488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.227674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.227705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.227940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.184 [2024-12-09 16:00:49.227971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.184 qpair failed and we were unable to recover it. 00:27:54.184 [2024-12-09 16:00:49.228100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.185 [2024-12-09 16:00:49.228131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.185 qpair failed and we were unable to recover it. 
00:27:54.185 [2024-12-09 16:00:49.228370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.228401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.228571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.228603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.228791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.228823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.229090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.229132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.229421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.229453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.229639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.229669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.229859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.229891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.230018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.230050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.230237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.230270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.230535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.230566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.230807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.230838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.230952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.230982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.231161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.231193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.231420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.231452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.231716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.231746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.231943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.231973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.232211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.232255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.232398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.232429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.232625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.232656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.232913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.232945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.233209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.233268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.233453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.233484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.233598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.233629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.233818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.233849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.234051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.234081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.234337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.234370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.234563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.234594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.234765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.234795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.235051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.235081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.235277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.235309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.235497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.235533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.235707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.235738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.235990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.236021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.236198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.236239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.236433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.236464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.236702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.236733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.185 qpair failed and we were unable to recover it.
00:27:54.185 [2024-12-09 16:00:49.236865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.185 [2024-12-09 16:00:49.236896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.237899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.237930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.238062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.238094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.238320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.238390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.238654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.238690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.238883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.238915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.239155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.239188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.239326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.239358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.239474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.239506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.239621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.239653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.239832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.239864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.240053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.240085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.240199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.240251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.240529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.240561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.240799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.240831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.241012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.241044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.241293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.241335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.241530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.241562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.241676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.241707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.241913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.241944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.242119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.242150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.242346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.242380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.242499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.242530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.242673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.242704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.242889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.242921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.243046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.243076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.243315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.243349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.243614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.243646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.243759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.243790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.243918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.243950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.244133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.244165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.244496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.186 [2024-12-09 16:00:49.244638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.186 [2024-12-09 16:00:49.244669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.186 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.244845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.244876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.245004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.245233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.245267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.245460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.245491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.245680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.245712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.245897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.245929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.246123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.246154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.246407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.246440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.246643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.246675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.246864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.246896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.247141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.247174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.247464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.247496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.247600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.247631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.247835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.247868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.247995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.248027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.248269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.248301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.248492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.248524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.248651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.248942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.248974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.249184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.249225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.249469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.187 [2024-12-09 16:00:49.249501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.187 qpair failed and we were unable to recover it.
00:27:54.187 [2024-12-09 16:00:49.249642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.249674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.249863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.249894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.250015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.250053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.250167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.250198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.250475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.250507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 
00:27:54.187 [2024-12-09 16:00:49.250677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.250709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.250905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.250936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.251131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.251164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.251351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.251384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.251578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.251610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 
00:27:54.187 [2024-12-09 16:00:49.251804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.251836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.252008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.252040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.252149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.252458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.252491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.252611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.252643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 
00:27:54.187 [2024-12-09 16:00:49.252816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.252849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.253053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.253085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.253189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.187 [2024-12-09 16:00:49.253230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.187 qpair failed and we were unable to recover it. 00:27:54.187 [2024-12-09 16:00:49.253402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.253434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.253620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.253653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.253781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.253813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.254006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.254037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.254228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.254261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.254475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.254507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.254637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.254668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.254856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.254888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.255065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.255096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.255212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.255256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.255395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.255427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.255610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.255682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.255910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.255946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.256197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.256248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.256380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.256412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.256604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.256636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.256900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.256933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.257062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.257093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.257285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.257317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.257600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.257633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.257756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.257788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.258051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.258082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.258269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.258302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.258491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.258523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.258726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.258766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.258883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.258915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.259209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.259250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.259427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.259458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.259639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.259670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.259788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.259820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.259935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.259965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.260146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.260178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.260448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.260481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.260612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.260644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.260760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.260791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.260908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.260939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.261062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.261093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 
00:27:54.188 [2024-12-09 16:00:49.261360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.261393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.261568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.188 [2024-12-09 16:00:49.261600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.188 qpair failed and we were unable to recover it. 00:27:54.188 [2024-12-09 16:00:49.261770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.261802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.262083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.262114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.262373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.262406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.262668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.262699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.262834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.262866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.263052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.263083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.263262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.263295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.263509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.263541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.263815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.264062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.264093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.264227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.264261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.264405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.264436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.264656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.264726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.264913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.265096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.265128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.265298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.265331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.265449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.265479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.265715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.265747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.266010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.266159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.266317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.266481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.266647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.266813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.266846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.267021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.267052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.267236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.267270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.267400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.267432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 00:27:54.189 [2024-12-09 16:00:49.267604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.189 [2024-12-09 16:00:49.267636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.189 qpair failed and we were unable to recover it. 
00:27:54.189 [2024-12-09 16:00:49.267809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.189 [2024-12-09 16:00:49.267841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.189 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed triplet above repeats verbatim 114 more times, timestamps advancing from 16:00:49.268050 to 16:00:49.291060, every attempt against addr=10.0.0.2, port=4420 with errno = 111; from 16:00:49.286912 onward the failing tqpair is 0xef2500 instead of 0x7fdedc000b90 ...]
00:27:54.192 [2024-12-09 16:00:49.291314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.291347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.291537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.291567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.291693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.291724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.291825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.292110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.292141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 
00:27:54.192 [2024-12-09 16:00:49.292243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.292275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.292402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.292434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.292604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.292635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.292817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.192 [2024-12-09 16:00:49.292850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.192 qpair failed and we were unable to recover it. 00:27:54.192 [2024-12-09 16:00:49.293026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.293058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.293242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.293275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.293394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.293425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.293622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.293653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.293774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.294011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.294042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.294157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.294188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.294372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.294404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.294604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.294635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.294844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.294876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.295158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.295189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.295395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.295428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.295609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.295640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.295814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.295846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.296051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.296083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.296261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.296293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.296482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.296514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.296699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.296730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.296898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.296929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.297100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.297131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.297235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.297269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.297486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.297523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.297640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.297669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.297804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.297834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.297996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.298028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.298202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.298243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.298434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.298464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.298579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.298610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.298855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.298887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.299063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.299094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.299265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.299298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.299477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.299509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.299776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.299806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.300018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.300051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.300172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.300203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.300432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.300501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 
00:27:54.193 [2024-12-09 16:00:49.300693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.300763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.300965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.300999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.301264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.301300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.193 [2024-12-09 16:00:49.301486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.193 [2024-12-09 16:00:49.301517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.193 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.301704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.301735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.301945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.301976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.302153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.302185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.302377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.302422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.302557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.302588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.302697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.302728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.302865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.302896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.303105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.303138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.303322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.303363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.303537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.303568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.303763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.303795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.303991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.304022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.304192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.304234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.304354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.304386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.304625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.304655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.304920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.304952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.305179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.305211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.305433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.305465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.305705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.305736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.305928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.305959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.306079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.306110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.306314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.306347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.306481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.306512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.306780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.306812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.306988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.307303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [2024-12-09 16:00:49.307468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.307628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.307781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.307935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.307966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 00:27:54.194 [2024-12-09 16:00:49.308148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.194 [2024-12-09 16:00:49.308180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.194 qpair failed and we were unable to recover it. 
00:27:54.194 [... repeated connect() failures (errno = 111) against addr=10.0.0.2, port=4420 elided: the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair recurs from 16:00:49.308376 through 16:00:49.331792, first for tqpair=0x7fdedc000b90 and then for tqpair=0x7fded8000b90, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:54.197 [2024-12-09 16:00:49.332050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.332081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.332270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.332304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.332492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.332525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.332738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.332771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.333010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.333042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 
00:27:54.197 [2024-12-09 16:00:49.333305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.333341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.333526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.333558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.333770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.333802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.334015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.334048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.334252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.334284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 
00:27:54.197 [2024-12-09 16:00:49.334475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.197 [2024-12-09 16:00:49.334509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.197 qpair failed and we were unable to recover it. 00:27:54.197 [2024-12-09 16:00:49.334636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.334667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.334868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.334900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.335072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.335104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.335275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.335344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.335497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.335535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.335717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.335750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.335874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.335907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.336173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.336206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.336470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.336502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.336684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.336716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.336851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.336883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.337130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.337161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.337287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.337319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.337460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.337492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.337731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.337903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.337935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.338193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.338237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.338383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.338416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.338601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.338634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.338870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.338902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.339035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.339068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.339259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.339291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.339401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.339433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.339643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.339676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.339862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.339894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.340096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.340127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.340367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.340400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.340600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.340633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.340880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.340912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.341023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.341054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.341245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.341279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.341473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.341505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.341690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.341722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.341844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.341876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.342057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.342088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.342368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.342401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.342576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.342608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.342800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.342831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 00:27:54.198 [2024-12-09 16:00:49.343078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.198 [2024-12-09 16:00:49.343109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.198 qpair failed and we were unable to recover it. 
00:27:54.198 [2024-12-09 16:00:49.343247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.343281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.343412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.343445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.343636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.343668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.343785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.343815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.343999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.344037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [2024-12-09 16:00:49.344236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.344279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.344467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.344497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.344732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.344764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.344937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.344968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.345084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.345115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [2024-12-09 16:00:49.345305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.345338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.345527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.345560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.345821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.345852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.345965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.345996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.346187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.346228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [2024-12-09 16:00:49.346403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.346434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.346611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.346643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.346821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.346853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.347049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.347080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.347321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.347353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [2024-12-09 16:00:49.347536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.347568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.347741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.347772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.347887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.347917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.348163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.348196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.348390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.348421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [2024-12-09 16:00:49.348593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.348626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.348860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.348891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.349156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.349188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.349386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.349455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 00:27:54.199 [2024-12-09 16:00:49.349665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.199 [2024-12-09 16:00:49.349702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.199 qpair failed and we were unable to recover it. 
00:27:54.199 [... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from [2024-12-09 16:00:49.349891] through [2024-12-09 16:00:49.374104], alternating between tqpair=0x7fdee4000b90 and tqpair=0x7fdedc000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:27:54.487 [2024-12-09 16:00:49.374292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.374324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.374434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.374465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.374640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.374672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.374917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.374949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.375121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.375153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.375358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.375390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.375525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.375556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.375843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.375875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.376075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.376107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.376299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.376333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.376447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.376479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.376682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.376714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.376897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.376928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.377114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.377145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.377264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.377296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.377469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.377499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.377740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.377771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.378009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.378040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.378209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.378270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.378478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.378509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.378688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.378720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.378980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.379011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.379187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.379228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.379365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.379396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.379576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.379615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.379805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.379837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.380029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.380060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.380250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.380284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.380546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.380576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.380755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.380786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 
00:27:54.487 [2024-12-09 16:00:49.380958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.380990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.381176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.381208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.381398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.487 [2024-12-09 16:00:49.381430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.487 qpair failed and we were unable to recover it. 00:27:54.487 [2024-12-09 16:00:49.381671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.381703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.382020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.382232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.382264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.382500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.382532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.382816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.382848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.383038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.383070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.383340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.383373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.383500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.383531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.383706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.383737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.383988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.384019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.384273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.384306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.384487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.384518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.384691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.384722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.384985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.385183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.385329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.385488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.385624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.385843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.385875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.386006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.386160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.386401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.386536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.386755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.386956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.386988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.387160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.387190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.387330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.387363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.387536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.387568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.387751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.387783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.388002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.388033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.388209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.388251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.388444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.388481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.388666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.388697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.388932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.388963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.389074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.389106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.389293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.389326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.389503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.389535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 
00:27:54.488 [2024-12-09 16:00:49.389727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.389758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.389937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.488 [2024-12-09 16:00:49.389969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.488 qpair failed and we were unable to recover it. 00:27:54.488 [2024-12-09 16:00:49.390099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.390130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.390372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.390405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.390586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.390617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.390857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.390888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.391001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.391031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.391157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.391338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.391371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.391613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.391645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.391904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.391935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.392113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.392145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.392269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.392301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.392475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.392506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.392638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.392670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.392775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.392807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.393023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.393054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.393340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.393373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.393559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.393590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.393783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.393815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.393998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.394030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.394165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.394198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.394470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.394502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.394744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.394776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.394947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.394979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.395113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.395144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.395323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.395357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.395532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.395564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.395735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.395767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.396026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.396057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.396246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.396280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.396461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.396493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.396682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.396714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.396996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.397155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.397323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.397590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.397747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.397908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.397940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.398204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.398244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 
00:27:54.489 [2024-12-09 16:00:49.398433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.489 [2024-12-09 16:00:49.398465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.489 qpair failed and we were unable to recover it. 00:27:54.489 [2024-12-09 16:00:49.398588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.398619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.398794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.398825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.399080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.399111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.399251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.399284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.399395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.399426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.399618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.399650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.399914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.399946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.400194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.400233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.400345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.400376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.400558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.400593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.400722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.400754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.400964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.400996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.401169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.401200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.401397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.401428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.401616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.401648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.401835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.401867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.402056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.402087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.402264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.402297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.402469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.402507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.402617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.402649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.402899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.402970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.403173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.403208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.403419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.403452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.403563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.403595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.403860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.403891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.404078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.404110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.404292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.404325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.404533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.404564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.404749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.404781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 
00:27:54.490 [2024-12-09 16:00:49.404963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.404995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.405117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.405148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.405331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.405363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.405613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.490 [2024-12-09 16:00:49.405644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.490 qpair failed and we were unable to recover it. 00:27:54.490 [2024-12-09 16:00:49.405777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.405818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.406003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.406034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.406152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.406182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.406384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.406417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.406596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.406628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.406828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.406860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.407040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.407071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.407261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.407295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.407472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.407503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.407689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.407721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.407911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.407942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.408115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.408146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.408280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.408313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.408514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.408552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.408799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.408829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.409011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.409042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.409335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.409368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.409498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.409530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.409709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.409740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.409909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.409942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.410154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.410186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.410366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.410399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.410562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.410593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.410786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.410817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.410991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.411023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 00:27:54.491 [2024-12-09 16:00:49.411193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.411231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.491 [2024-12-09 16:00:49.411474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.491 [2024-12-09 16:00:49.411506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.491 qpair failed and we were unable to recover it. 
00:27:54.492 [2024-12-09 16:00:49.417551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.492 [2024-12-09 16:00:49.417586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.492 qpair failed and we were unable to recover it. 
00:27:54.494 [2024-12-09 16:00:49.436923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.436956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.437141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.437173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.437451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.437649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.437681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.437804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.437836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 
00:27:54.494 [2024-12-09 16:00:49.437960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.437993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.438184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.438244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.438373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.438405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.438524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.438555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.494 [2024-12-09 16:00:49.438671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.438701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 
00:27:54.494 [2024-12-09 16:00:49.438949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.494 [2024-12-09 16:00:49.438982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.494 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.439165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.439196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.439417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.439450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.439692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.439724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.439922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.439953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.440153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.440185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.440309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.440340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.440454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.440486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.440665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.440698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.440980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.441017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.441280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.441314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.441497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.441528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.441792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.441824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.442109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.442140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.442380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.442413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.442526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.442558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.442769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.442801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.442985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.443017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.443237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.443269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.443536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.443568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.443784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.443817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.444025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.444243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.444472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.444628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.444763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.444964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.444996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.445179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.445211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.445482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.445513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.445638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.445670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.445801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.445832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.446016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.446048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.446294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.446329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.446503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.446534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.446772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.446804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.495 [2024-12-09 16:00:49.446919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.446951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.447079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.447110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.447292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.447325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.447448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.447481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 00:27:54.495 [2024-12-09 16:00:49.447604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.495 [2024-12-09 16:00:49.447636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.495 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.447808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.447840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.448015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.448047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.448251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.448306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.448522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.448765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.448800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.448969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.449242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.449275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.449462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.449493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.449678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.449710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.449880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.449911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.450157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.450190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.450426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.450459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.450568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.450601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.450726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.450757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.450971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.451131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.451432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.451568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.451724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.451895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.451927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.452048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.452081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.452337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.452371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.452625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.452657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.452775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.452807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.452901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.452934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.496 [2024-12-09 16:00:49.453063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.453095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.453372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.453406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.453589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.453621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.453888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.453920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 00:27:54.496 [2024-12-09 16:00:49.454041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.496 [2024-12-09 16:00:49.454074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.496 qpair failed and we were unable to recover it. 
00:27:54.499 [2024-12-09 16:00:49.478289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.478323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.478514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.478546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.478766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.478798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.479096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.479129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.479369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.479405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 
00:27:54.499 [2024-12-09 16:00:49.479588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.479620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.479791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.479823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.479998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.480029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.480163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.480196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.480315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.480346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 
00:27:54.499 [2024-12-09 16:00:49.480540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.480572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.499 [2024-12-09 16:00:49.480705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.499 [2024-12-09 16:00:49.480738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.499 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.480980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.481013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.481186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.481240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.481366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.481398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.481581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.481614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.481803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.481840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.482041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.482072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.482197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.482251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.482379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.482412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.482686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.482719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.482912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.482944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.483132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.483164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.483352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.483386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.483603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.483636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.483842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.483874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.484010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.484174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.484453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.484592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.484743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.484946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.484979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.485154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.485186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.485375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.485409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.485528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.485560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.485681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.485713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.485897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.485930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.486046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.486078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.486269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.486302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.486477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.486510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.486773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.486805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.486940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.486971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.487162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.487194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.487382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.487415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.487585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.487617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.487864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.487897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.488011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.488043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.488282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.488313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.488499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.488532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.500 [2024-12-09 16:00:49.488659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.488692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 
00:27:54.500 [2024-12-09 16:00:49.488808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.500 [2024-12-09 16:00:49.488840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.500 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.489079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.489113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.489355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.489389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.489575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.489608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.489713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.489746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.489925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.489957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.490193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.490262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.490379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.490412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.490544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.490577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.490782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.490813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.490915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.490947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.491136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.491168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.491369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.491403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.491524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.491556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.491757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.491790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.491976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.492008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.492194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.492236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.492367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.492400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.492582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.492615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.492797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.492828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.493010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.493043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.493248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.493283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.493409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.493441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.493637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.493670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.493842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.493875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.493983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.494015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.494186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.494226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.494357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.494390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.494570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.494602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 00:27:54.501 [2024-12-09 16:00:49.494789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.501 [2024-12-09 16:00:49.494820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.501 qpair failed and we were unable to recover it. 
00:27:54.501 [2024-12-09 16:00:49.494959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.501 [2024-12-09 16:00:49.494992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.501 qpair failed and we were unable to recover it.
[... same three-line error sequence repeated 109 more times between 16:00:49.494959 and 16:00:49.516341, all for tqpair=0x7fdedc000b90, addr=10.0.0.2, port=4420, errno = 111 ...]
00:27:54.504 [2024-12-09 16:00:49.516470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.516502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.516677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.516710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.516836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.516868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.517039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.517071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.517181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.517212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 
00:27:54.504 [2024-12-09 16:00:49.517426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.517459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.517645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.517679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.517895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.517927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.518041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.518074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.518184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.518226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 
00:27:54.504 [2024-12-09 16:00:49.518432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.518464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.518566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.518598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.504 qpair failed and we were unable to recover it. 00:27:54.504 [2024-12-09 16:00:49.518699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.504 [2024-12-09 16:00:49.518730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.518861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.518893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.519000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.519033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.519160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.519192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.519406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.519438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.519708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.519741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.519936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.519969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.520214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.520257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.520365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.520397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.520657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.520689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.520798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.520829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.520958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.520990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.521169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.521200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.521483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.521515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.521689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.521727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.521916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.521947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.522067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.522233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.522394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.522540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.522689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.522915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.522947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.523184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.523226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.523405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.523437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.523566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.523598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.523708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.523742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.523862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.523895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.524069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.524229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.524379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.524525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.524659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.524821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.524853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.525026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.525270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.525491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.525643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.525782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 
00:27:54.505 [2024-12-09 16:00:49.525932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.525964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.505 qpair failed and we were unable to recover it. 00:27:54.505 [2024-12-09 16:00:49.526133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.505 [2024-12-09 16:00:49.526166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.526361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.526393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.526571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.526644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.526840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.526875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 
00:27:54.506 [2024-12-09 16:00:49.527053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.527086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.527213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.527265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.527447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.527479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.527722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.527753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.527995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.528026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 
00:27:54.506 [2024-12-09 16:00:49.528147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.528178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.528371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.528403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.528507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.528539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.528801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.528832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.528968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.529000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 
00:27:54.506 [2024-12-09 16:00:49.529183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.529214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.529416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.529449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.529630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.529662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.529924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.529956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.530137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.530170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 
00:27:54.506 [2024-12-09 16:00:49.530301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.530333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.530578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.530609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.530728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.530759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.530951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.530982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 00:27:54.506 [2024-12-09 16:00:49.531165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.506 [2024-12-09 16:00:49.531196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.506 qpair failed and we were unable to recover it. 
00:27:54.506 [2024-12-09 16:00:49.531388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.531419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.531602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.531634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.531737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.531768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.531955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.531986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.532122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.532154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.532352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.532391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.532564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.532594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.532724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.532756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.532875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.533110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.533270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.533491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.533655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.533856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.533969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.534000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.506 [2024-12-09 16:00:49.534181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.506 [2024-12-09 16:00:49.534212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.506 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.534396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.534447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.534548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.534579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.534697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.534729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.534844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.534875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.534988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.535019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.535256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.535289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.535462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.535494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.535619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.535651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.535845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.535876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.536094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.536320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.536478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.536631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.536803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.536980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.537116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.537266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.537414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.537619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.537872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.537904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.538945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.538977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.539215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.539255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.539516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.539547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.539725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.539755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.539968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.540000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.540255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.540294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.540483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.540516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.540701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.540733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.540929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.540961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.541086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.541118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.541317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.541351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.541457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.541489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.507 qpair failed and we were unable to recover it.
00:27:54.507 [2024-12-09 16:00:49.541612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.507 [2024-12-09 16:00:49.541644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.541761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.541793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.541996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.542866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.542987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.543937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.543969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.544148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.544180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.544310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.544343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.544606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.544638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.544809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.544842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.544965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.544996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.545122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.545154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.545342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.545377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.545480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.545512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.545714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.545746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.545872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.545903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.546925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.546956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.547956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.508 [2024-12-09 16:00:49.547987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.508 qpair failed and we were unable to recover it.
00:27:54.508 [2024-12-09 16:00:49.548183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.548216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.548338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.548370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.548559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.548592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.548774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.548806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.548987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.549193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.549350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.549488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.549700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.549901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.549932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.550136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.550169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.550326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.550359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.550529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.550560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.550751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.550781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.550971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.551956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.551988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.552249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.552301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.552418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.552449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.552586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.552631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.552813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.509 [2024-12-09 16:00:49.552843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.509 qpair failed and we were unable to recover it.
00:27:54.509 [2024-12-09 16:00:49.553026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.553162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.553304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.553504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.553662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 
00:27:54.509 [2024-12-09 16:00:49.553823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.553993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.554227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.554260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.554367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.554400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.554501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.554531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 
00:27:54.509 [2024-12-09 16:00:49.554654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.554685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.554807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.554838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.555018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.555050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.555254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.555288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 00:27:54.509 [2024-12-09 16:00:49.555402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.509 [2024-12-09 16:00:49.555434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.509 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.555543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.555575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.555755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.555785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.555956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.555987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.556113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.556145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.556356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.556390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.556566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.556598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.556802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.556949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.556981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.557104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.557264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.557420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.557592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.557794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.557940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.557971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.558075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.558232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.558445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.558603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.558740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.558891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.558921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.559097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.559129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.559423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.559457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.559634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.559666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.559782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.559813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.560011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.560170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.560396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.560551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.560707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.560855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.560887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.561037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.561069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.561243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.561276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.561404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.561435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.561610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.561642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.561814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.561846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 
00:27:54.510 [2024-12-09 16:00:49.562050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.562081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.562270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.562303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.562423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.562454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.562703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.510 [2024-12-09 16:00:49.562735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.510 qpair failed and we were unable to recover it. 00:27:54.510 [2024-12-09 16:00:49.562909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.562939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.511 [2024-12-09 16:00:49.563058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.563205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.563387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.563530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.563757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.511 [2024-12-09 16:00:49.563896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.563926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.564037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.564179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.564346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.564552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.511 [2024-12-09 16:00:49.564756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.564960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.564997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.565168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.565199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.565453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.565485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.565669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.565699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.511 [2024-12-09 16:00:49.565830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.565862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.565976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.566202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.566362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.566508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.511 [2024-12-09 16:00:49.566720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.566933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.566964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.567094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.567124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.567237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.567270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 00:27:54.511 [2024-12-09 16:00:49.567443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.511 [2024-12-09 16:00:49.567474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.511 qpair failed and we were unable to recover it. 
00:27:54.513 [2024-12-09 16:00:49.579380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.513 [2024-12-09 16:00:49.579449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.513 qpair failed and we were unable to recover it.
00:27:54.514 [2024-12-09 16:00:49.586129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.586160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.586398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.586432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.586538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.586569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.586753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.586793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.586938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.586970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 
00:27:54.514 [2024-12-09 16:00:49.587190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.587232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.587347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.587380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.587640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.587672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.587776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.587808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.587914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.587946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 
00:27:54.514 [2024-12-09 16:00:49.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.588164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.588358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.588389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.588496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.588528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.588651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.588682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.588852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.588902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 
00:27:54.514 [2024-12-09 16:00:49.589077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.589107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.514 [2024-12-09 16:00:49.589210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.514 [2024-12-09 16:00:49.589251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.514 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.589471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.589502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.589688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.589718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.589840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.589871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.590043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.590075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.590184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.590215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.590408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.590439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.590705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.590736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.590847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.590879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.590985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.591144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.591297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.591448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.591618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.591752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.591962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.591993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.592180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.592212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.592340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.592372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.592475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.592507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.592684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.592716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.592823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.592855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.592995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.593026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.593129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.593162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.593342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.593374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.593489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.593521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.593697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.593728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.593988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.594021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.594262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.594295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.594431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.594463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.594637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.594668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.594787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.594817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.515 [2024-12-09 16:00:49.595530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.595966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.595997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 00:27:54.515 [2024-12-09 16:00:49.596174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.515 [2024-12-09 16:00:49.596205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.515 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.596335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.596367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.596502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.596533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.596714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.596746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.596864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.596897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.597009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.597040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.597155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.597186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.597446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.597478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.597591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.597622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.597795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.597826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.598000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.598160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.598388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.598541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.598752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.598956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.598988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.599092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.599122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.599320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.599355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.599474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.599506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.599677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.599708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.599834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.599865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.599989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.600231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.600438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.600591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.600734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.516 [2024-12-09 16:00:49.600893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.600925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.601099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.601131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.601307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.601339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.601444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.601474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 00:27:54.516 [2024-12-09 16:00:49.601586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.516 [2024-12-09 16:00:49.601624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.516 qpair failed and we were unable to recover it. 
00:27:54.517 [2024-12-09 16:00:49.606417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.517 [2024-12-09 16:00:49.606454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.517 qpair failed and we were unable to recover it. 
00:27:54.519 [2024-12-09 16:00:49.621496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.519 [2024-12-09 16:00:49.621527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.519 qpair failed and we were unable to recover it. 00:27:54.519 [2024-12-09 16:00:49.621646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.519 [2024-12-09 16:00:49.621677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.519 qpair failed and we were unable to recover it. 00:27:54.519 [2024-12-09 16:00:49.621777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.621808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.621914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.622049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.622272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.622422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.622555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.622687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.622885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.622916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.623028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.623176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.623361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.623498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.623645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.623782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.623913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.623944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.624056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.624086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.624266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.624298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.624413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.624443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.624557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.624589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.624761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.624792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.624969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.625127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.625339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.625482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.625626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.625771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.625939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.625971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.626087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.626119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.626290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.626323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.626578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.626610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.626718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.626746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.626854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.626883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.627058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.627086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 
00:27:54.520 [2024-12-09 16:00:49.627184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.627212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.627352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.520 [2024-12-09 16:00:49.627381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.520 qpair failed and we were unable to recover it. 00:27:54.520 [2024-12-09 16:00:49.627514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.627542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.627652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.627679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.627798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.627827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.627926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.627955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.628691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.628964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.628992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.629098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.629291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.629437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.629567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.629691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.629850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.629878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.630115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.630263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.630473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.630626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.630766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.630911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.630939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.631112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.631333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.631478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.631604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.631731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.631924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.631952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.632060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.632089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.632263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.632293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.632404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.632431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.632596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.632625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 
00:27:54.521 [2024-12-09 16:00:49.632744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.521 [2024-12-09 16:00:49.632772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.521 qpair failed and we were unable to recover it. 00:27:54.521 [2024-12-09 16:00:49.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.522 [2024-12-09 16:00:49.632907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.522 qpair failed and we were unable to recover it. 00:27:54.522 [2024-12-09 16:00:49.633016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.522 [2024-12-09 16:00:49.633044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.522 qpair failed and we were unable to recover it. 00:27:54.522 [2024-12-09 16:00:49.633137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.522 [2024-12-09 16:00:49.633165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.522 qpair failed and we were unable to recover it. 00:27:54.522 [2024-12-09 16:00:49.633286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.522 [2024-12-09 16:00:49.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.522 qpair failed and we were unable to recover it. 
00:27:54.522 [2024-12-09 16:00:49.633492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.522 [2024-12-09 16:00:49.633520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.522 qpair failed and we were unable to recover it.
00:27:54.522 [... the three-line posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" sequence above repeats for every retry from 16:00:49.633697 through 16:00:49.655037, always with errno = 111, addr=10.0.0.2, port=4420, alternating between tqpair=0xef2500 and tqpair=0x7fdee4000b90; the near-identical repeats are elided here.]
00:27:54.525 [2024-12-09 16:00:49.655208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.655251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.655372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.655403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.655514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.655544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.655715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.655746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.655868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.655899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 
00:27:54.525 [2024-12-09 16:00:49.656019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.656230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.656382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.656531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.656676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 
00:27:54.525 [2024-12-09 16:00:49.656825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.656856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.656975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.657007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.657127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.657158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.657336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.657370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.657633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.657664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 
00:27:54.525 [2024-12-09 16:00:49.657851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.657882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.658000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.658233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.658403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.658550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 
00:27:54.525 [2024-12-09 16:00:49.658771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.658937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.658968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.659150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.659182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.659369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.525 [2024-12-09 16:00:49.659401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.525 qpair failed and we were unable to recover it. 00:27:54.525 [2024-12-09 16:00:49.659502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.659534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.659662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.659694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.659946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.659979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.660185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.660214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.660346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.660378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.660564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.660597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.660776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.660807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.660998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.661207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.661355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.661491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.661696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.661928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.661959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.662081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.662112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.662246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.662284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.662417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.662447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.662635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.662666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.662838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.662869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.662999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.663161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.663304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.663516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.663730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.663894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.663924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.664041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.664073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.664194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.664249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.664429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.664460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.664700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.664732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.664876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.664907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.665084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.665237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 
00:27:54.526 [2024-12-09 16:00:49.665375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.665524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.665728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.526 qpair failed and we were unable to recover it. 00:27:54.526 [2024-12-09 16:00:49.665859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.526 [2024-12-09 16:00:49.665889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.666128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.666159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 
00:27:54.527 [2024-12-09 16:00:49.666356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.666389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.666511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.666543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.666662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.666692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.666873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.666905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.667042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.667073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 
00:27:54.527 [2024-12-09 16:00:49.667267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.667300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.667493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.667524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.667648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.667680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.667854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.667886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.668012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.668044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 
00:27:54.527 [2024-12-09 16:00:49.668231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.668264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.668440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.668471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.668652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.668681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.668810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.668841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 00:27:54.527 [2024-12-09 16:00:49.669023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.527 [2024-12-09 16:00:49.669054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:54.527 qpair failed and we were unable to recover it. 
00:27:54.527 [2024-12-09 16:00:49.669161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.669191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.669317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.669349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.669464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.669495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.669616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.669647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.669819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.669891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.670944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.670975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.671096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.671128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.671318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.671352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.671495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.671527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.671703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.671735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.671914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.671945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.672861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.527 qpair failed and we were unable to recover it.
00:27:54.527 [2024-12-09 16:00:49.672980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.527 [2024-12-09 16:00:49.673012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.673264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.673298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.673414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.673445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.673704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.673736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.673869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.673900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.674111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.674143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.674317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.674348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.674467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.674498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.674631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.674661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.674912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.674944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.675124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.675156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.675276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.675309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.675491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.675523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.675646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.675678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.675875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.675907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.676924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.676955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.677942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.677975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.678894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.678926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.679084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.679235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.679398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.679603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.679738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.679969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.680000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.680169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.680201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.680332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.680364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.680473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.680504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.528 [2024-12-09 16:00:49.680683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.528 [2024-12-09 16:00:49.680715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.528 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.680883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.680914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.681134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.681340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.681498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.681646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.681799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.681975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.682826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.682995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.683136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.683380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.683622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.683766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.683928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.683958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.684094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.684126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.684245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.684284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.684468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.684499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.684620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.684652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.684917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.684948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.685068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.685100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.685214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.685343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.685526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.685558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.685730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.685762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.685886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.685917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.686888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.686920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.529 [2024-12-09 16:00:49.687924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.529 [2024-12-09 16:00:49.687956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.529 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.688914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.688947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.689055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.689087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.689344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.689377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.689563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.689594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.689714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.814 [2024-12-09 16:00:49.689745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.814 qpair failed and we were unable to recover it.
00:27:54.814 [2024-12-09 16:00:49.689851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.689881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.690049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.690258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.690396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.690525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 
00:27:54.814 [2024-12-09 16:00:49.690819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.690958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.690990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.691113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.691266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.691405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 
00:27:54.814 [2024-12-09 16:00:49.691548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.691778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.691912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.691943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.692077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.692109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.692287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.692320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 
00:27:54.814 [2024-12-09 16:00:49.692563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.692594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.692713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.692744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.692923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.692955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.693074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.693105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.693277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.693310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 
00:27:54.814 [2024-12-09 16:00:49.693447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.693478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.693593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.693624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.693826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.693857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.814 qpair failed and we were unable to recover it. 00:27:54.814 [2024-12-09 16:00:49.693975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.814 [2024-12-09 16:00:49.694006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.694175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.694207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.694322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.694355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.694472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.694502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.694762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.694793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.694896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.694927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.695210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.695249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.695362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.695394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.695502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.695534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.695636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.695667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.695803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.695835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.696016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.696247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.696407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.696542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.696695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.696842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.696874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.696990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.697702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.697873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.697988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.698143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.698373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.698515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.698661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.698804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.698835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.699006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.699156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.699325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.699480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.699690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.699854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.699885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.700058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.700089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 
00:27:54.815 [2024-12-09 16:00:49.700271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.700304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.700431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.815 [2024-12-09 16:00:49.700462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.815 qpair failed and we were unable to recover it. 00:27:54.815 [2024-12-09 16:00:49.700650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.700682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.700812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.700844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.700948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.700979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 
00:27:54.816 [2024-12-09 16:00:49.701184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.701228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.701351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.701382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.701570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.701600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.701733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.701764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.701869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.701899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 
00:27:54.816 [2024-12-09 16:00:49.702156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.702188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.702301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.702333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.702446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.702477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.702593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.702625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 00:27:54.816 [2024-12-09 16:00:49.702734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.816 [2024-12-09 16:00:49.702765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.816 qpair failed and we were unable to recover it. 
00:27:54.816 [2024-12-09 16:00:49.702946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.816 [2024-12-09 16:00:49.702976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.816 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously through 2024-12-09 16:00:49.723345; every attempt fails with errno = 111 for tqpair=0x7fdee4000b90, addr=10.0.0.2, port=4420 ...]
00:27:54.819 [2024-12-09 16:00:49.723452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.723482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.723650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.723679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.723856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.723885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.724056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.724198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 
00:27:54.819 [2024-12-09 16:00:49.724334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.724476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.724671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.724881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.724911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.725078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.725106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 
00:27:54.819 [2024-12-09 16:00:49.725209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.725247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.725444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.725473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.725576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.725605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.725706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.725734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.726034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.726062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 
00:27:54.819 [2024-12-09 16:00:49.726164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.726193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.726368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.726397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.726495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.819 [2024-12-09 16:00:49.726523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.819 qpair failed and we were unable to recover it. 00:27:54.819 [2024-12-09 16:00:49.726728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.726756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.726895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.726942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.727058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.727185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.727352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.727505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.727851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.727988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.728201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.728348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.728555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.728691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.728845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.728876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.729487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.729905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.729936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.730065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.730238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.730383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.730583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.730723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.730933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.730970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.731160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.731190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.731480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.731512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.731748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.731780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.731906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.731935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.732111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.732142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.732263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.732296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.732433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.732462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.732586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.732618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.732730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.732762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.733003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.733035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 
00:27:54.820 [2024-12-09 16:00:49.733204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.733244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.733353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.820 [2024-12-09 16:00:49.733384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.820 qpair failed and we were unable to recover it. 00:27:54.820 [2024-12-09 16:00:49.733496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.733527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.733640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.733670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.733807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.733844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 
00:27:54.821 [2024-12-09 16:00:49.734101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.734131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.734321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.734354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.734555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.734585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.734701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.734732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.734838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.734868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 
00:27:54.821 [2024-12-09 16:00:49.734997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.735242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.735399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.735539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.735687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 
00:27:54.821 [2024-12-09 16:00:49.735895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.735926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.736053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.736084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.736190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.736229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.736478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.736510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 00:27:54.821 [2024-12-09 16:00:49.736635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.821 [2024-12-09 16:00:49.736667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.821 qpair failed and we were unable to recover it. 
00:27:54.821 [2024-12-09 16:00:49.736783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.821 [2024-12-09 16:00:49.736814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:54.821 qpair failed and we were unable to recover it.
00:27:54.821 [... identical connect() failures (errno = 111) and unrecoverable qpair errors repeat through [2024-12-09 16:00:49.756709], alternating between tqpair=0x7fdee4000b90 and tqpair=0x7fded8000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:54.824 [2024-12-09 16:00:49.756899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.756930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.757032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.757063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.757256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.757289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.757400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.757430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.757668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.757699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 
00:27:54.824 [2024-12-09 16:00:49.757811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.757841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.758021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.758184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.758356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.758565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 
00:27:54.824 [2024-12-09 16:00:49.758718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.824 qpair failed and we were unable to recover it. 00:27:54.824 [2024-12-09 16:00:49.758873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.824 [2024-12-09 16:00:49.758904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.759014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.759148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.759295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.759501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.759709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.759851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.759882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.760054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.760211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.760463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.760608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.760775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.760914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.760945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.761059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.761270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.761407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.761565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.761713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.761851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.761889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.762011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.762157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.762315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.762465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.762611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.762750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.762896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.762928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.763057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.763192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.763420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.763628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.763765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.763913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.763945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.764059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.764091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.764266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.764298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 
00:27:54.825 [2024-12-09 16:00:49.764484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.825 [2024-12-09 16:00:49.764516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.825 qpair failed and we were unable to recover it. 00:27:54.825 [2024-12-09 16:00:49.764616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.764647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.764819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.764850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.764959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.764990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.765098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.765259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.765415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.765580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.765733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.765867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.765898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.766012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.766167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.766347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.766564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.766702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.766899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.766930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.767644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.767944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.767975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.768080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.768111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.768291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.768324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.768525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.768557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.768659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.768691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.768889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.768920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.769115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.769147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 00:27:54.826 [2024-12-09 16:00:49.769336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.769369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it. 
00:27:54.826 [2024-12-09 16:00:49.769489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.826 [2024-12-09 16:00:49.769520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.826 qpair failed and we were unable to recover it.
[... the connect()/qpair error pair above repeats continuously for tqpair=0x7fdee4000b90 through timestamp 16:00:49.785, then continues identically for tqpair=0x7fded8000b90 through 16:00:49.789; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:27:54.830 [2024-12-09 16:00:49.789907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.789940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.790065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.790096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.790206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.790250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.790374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.790407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.790529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.790560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.790784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.790817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.791041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.791073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.791191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.791234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.791424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.791455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.791667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.791699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.791873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.791905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.792099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.792250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.792463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.792623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.792782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.792917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.792948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.793138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.793168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.793320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.793352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.793530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.793562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.793688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.793719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.793896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.793927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.794043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.794074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.794182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.794213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.794341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.794374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.794544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.794575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.794822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.794861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.794979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.795145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.795303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.795529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.795734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.795943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.795974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.796145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.796177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 00:27:54.830 [2024-12-09 16:00:49.796304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.830 [2024-12-09 16:00:49.796337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.830 qpair failed and we were unable to recover it. 
00:27:54.830 [2024-12-09 16:00:49.796578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.796611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.796729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.796761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.796942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.796974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.797081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.797112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.797245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.797278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.797469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.797501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.797629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.797660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.797779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.797808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.798044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.798200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.798360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.798560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.798712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.798856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.798887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.799062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.799093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.799276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.799308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.799433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.799463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.799584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.799616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.799887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.799921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.800032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.800063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.800263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.800297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.800413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.800445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.800621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.800653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.800842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.800873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.800983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.801132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.801368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.801537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.801748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.801966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.801998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.802180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.802210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.802340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.802382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.802489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.802520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.802692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.802724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.802909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.802940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 
00:27:54.831 [2024-12-09 16:00:49.803176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.803208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.803429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.803543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.803575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.803694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.831 [2024-12-09 16:00:49.803726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.831 qpair failed and we were unable to recover it. 00:27:54.831 [2024-12-09 16:00:49.803854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.832 [2024-12-09 16:00:49.803886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.832 qpair failed and we were unable to recover it. 
00:27:54.832 [2024-12-09 16:00:49.804019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.832 [2024-12-09 16:00:49.804050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.832 qpair failed and we were unable to recover it.
[... the three-line error above repeats over 100 more times between 16:00:49.804173 and 16:00:49.825699, always connect() errno = 111 against addr=10.0.0.2, port=4420; the failing tqpair is 0x7fded8000b90 in most repetitions, with a handful of occurrences for tqpair=0x7fdee4000b90 and one for tqpair=0x7fdedc000b90, each ending "qpair failed and we were unable to recover it." ...]
00:27:54.835 [2024-12-09 16:00:49.825817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.825848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.825968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.826186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.826420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.826645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.826802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.826944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.826976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.827078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.827108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.827250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.827284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.827459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.827490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.827620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.827652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.827831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.827862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.827979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.828137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.828300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.828456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.828601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.828743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.828953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.828984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.829090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.829121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.829291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.829324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.829513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.829544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.829678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.829709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.829839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.829877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.829983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.830033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.830153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.830184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.830373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.830405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.830588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.830619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.830789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.830820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.831008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.831039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 
00:27:54.835 [2024-12-09 16:00:49.831209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.831248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.831380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.831411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.831587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.831618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.831795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.835 [2024-12-09 16:00:49.831827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.835 qpair failed and we were unable to recover it. 00:27:54.835 [2024-12-09 16:00:49.832009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.832152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.832332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.832476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.832619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.832760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.832933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.832964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.833072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.833103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.833230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.833263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.833530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.833561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.833751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.833781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.833912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.833942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.834114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.834146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.834280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.834313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.834485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.834516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.834654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.834687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.834826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.834858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.835638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.835927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.835958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.836074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.836274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.836422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.836559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.836761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.836915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.836952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.837126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.837158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.837281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.837314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.837480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.837512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.837770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.837801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.837991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.838023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.838139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.838169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 
00:27:54.836 [2024-12-09 16:00:49.838287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.838322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.838436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.838467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.838575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.836 [2024-12-09 16:00:49.838606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.836 qpair failed and we were unable to recover it. 00:27:54.836 [2024-12-09 16:00:49.838730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.838761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 00:27:54.837 [2024-12-09 16:00:49.838949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.838981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 
00:27:54.837 [2024-12-09 16:00:49.839159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.839190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 00:27:54.837 [2024-12-09 16:00:49.839416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.839468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 00:27:54.837 [2024-12-09 16:00:49.839605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.839638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 00:27:54.837 [2024-12-09 16:00:49.839754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.839786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 00:27:54.837 [2024-12-09 16:00:49.839891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.837 [2024-12-09 16:00:49.839924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.837 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.859571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.859601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.859715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.859746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.859852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.859883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.859982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.860114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.860259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.860488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.860725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.860928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.860960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.861099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.861319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.861479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.861624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.861756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.861894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.861925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.862026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.862250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.862459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.862601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.862753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.862965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.862997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.863119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.863150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.863269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.863301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.863418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.863449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.863570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.863601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.863862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.863894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.864018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.864154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.864352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.864651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.864806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.864944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.864975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.865106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.865136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.865281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.865320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 00:27:54.840 [2024-12-09 16:00:49.865517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.840 [2024-12-09 16:00:49.865548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.840 qpair failed and we were unable to recover it. 
00:27:54.840 [2024-12-09 16:00:49.865654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.865685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.865807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.865837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.865973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.866179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.866342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.866475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.866609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.866827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.866858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.867400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.867873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.867974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.868125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.868338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.868489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.868710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.868864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.868896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.869017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.869166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.869335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.869475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.869706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.869954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.869985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.870182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.870213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.870439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.870470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.870644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.870676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.870845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.870877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 
00:27:54.841 [2024-12-09 16:00:49.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.871081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.871255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.871287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.871485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.871516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.871700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.841 [2024-12-09 16:00:49.871732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.841 qpair failed and we were unable to recover it. 00:27:54.841 [2024-12-09 16:00:49.872001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 
00:27:54.842 [2024-12-09 16:00:49.872277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 00:27:54.842 [2024-12-09 16:00:49.872417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 00:27:54.842 [2024-12-09 16:00:49.872629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 00:27:54.842 [2024-12-09 16:00:49.872791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 00:27:54.842 [2024-12-09 16:00:49.872929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.842 [2024-12-09 16:00:49.872961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.842 qpair failed and we were unable to recover it. 
00:27:54.842 [2024-12-09 16:00:49.873149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.842 [2024-12-09 16:00:49.873179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.842 qpair failed and we were unable to recover it.
[... the three lines above repeat with identical content for each subsequent reconnect attempt on tqpair=0x7fded8000b90, from 16:00:49.873343 through 16:00:49.879186 ...]
00:27:54.843 [2024-12-09 16:00:49.879389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.843 [2024-12-09 16:00:49.879425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.843 qpair failed and we were unable to recover it.
[... the three lines above repeat with identical content for each subsequent reconnect attempt on tqpair=0x7fdedc000b90, from 16:00:49.879604 through 16:00:49.894263; every attempt fails with connect() errno = 111 (ECONNREFUSED) to addr=10.0.0.2, port=4420 ...]
00:27:54.845 [2024-12-09 16:00:49.894437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.894469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.894658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.894689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.894876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.894907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.895018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.895054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.895263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.895296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.895468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.895499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.895676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.895707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.895883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.895914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.896141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.896172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.896365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.896396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.896584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.896615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.896736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.896767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.896875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.896905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.897052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.897249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.897391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.897601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.897822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.897957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.897989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.898158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.898187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.898377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.898410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.898590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.898623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.898748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.898778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.899058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.899090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.899206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.899247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.899378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.899408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.899590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.899621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.899828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.899860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.900060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.900091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.900214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.900255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 
00:27:54.845 [2024-12-09 16:00:49.900434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.845 [2024-12-09 16:00:49.900464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.845 qpair failed and we were unable to recover it. 00:27:54.845 [2024-12-09 16:00:49.900730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.900762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.900931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.900961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.901081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.901239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.901390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.901604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.901748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.901950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.901981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.902229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.902387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.902529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.902675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.902811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.902956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.902986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.903087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.903245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.903383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.903605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.903808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.903943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.903973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.904173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.904202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.904353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.904383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.904498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.904529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.904768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.904800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.904905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.904936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.905037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.905194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.905373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.905511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.905727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.905887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.905917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.906019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.906173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.906332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.906536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.906740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.906892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.906923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.907039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.907069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 00:27:54.846 [2024-12-09 16:00:49.907183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.846 [2024-12-09 16:00:49.907214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.846 qpair failed and we were unable to recover it. 
00:27:54.846 [2024-12-09 16:00:49.907349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.907381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.907564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.907595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.907707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.907736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.907918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.907950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.908053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.908084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 
00:27:54.847 [2024-12-09 16:00:49.908276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.908309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.908426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.908458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.908565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.908594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.908843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.908873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 00:27:54.847 [2024-12-09 16:00:49.909048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.847 [2024-12-09 16:00:49.909078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.847 qpair failed and we were unable to recover it. 
[log repetition elided: the same record triple — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from 2024-12-09 16:00:49.909255 through 16:00:49.931553, with only the timestamps changing]
00:27:54.850 [2024-12-09 16:00:49.931673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.931705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.931886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.931917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.932050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.932081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.932252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.932285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.932473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.932504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.932706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.932737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.932974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.933006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.933122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.933152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.933390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.933424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.933595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.933863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.933894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.934153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.934183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.934308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.934340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.934548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.934585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.934707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.934738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.934858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.934888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.935149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.935180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.935518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.935553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.935738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.935769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.935952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.935982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.936256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.936290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.936405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.936436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.936623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.936653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.936832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.936863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.936980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.937010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.937183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.937214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.937410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.937442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.937568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.937598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.937791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.937822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 00:27:54.850 [2024-12-09 16:00:49.938061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.850 [2024-12-09 16:00:49.938094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.850 qpair failed and we were unable to recover it. 
00:27:54.850 [2024-12-09 16:00:49.938212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.938254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.938458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.938490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.938660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.938692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.938881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.938913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.939151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.939182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.939450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.939482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.939745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.939775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.939887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.939918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.940091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.940123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.940310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.940342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.940520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.940551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.940734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.940766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.940949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.940980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.941110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.941141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.941366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.941400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.941597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.941629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.941812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.941844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.942081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.942112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.942243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.942276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.942451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.942482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.942653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.942686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.942795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.942826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.943068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.943100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.943274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.943311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.943486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.943517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.943711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.943744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.943932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.943964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.944146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.944177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.944390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.944423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.944542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.944572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.944752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.944783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.944993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.945024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.945291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.945324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.945506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.945538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.945755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.945786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 
00:27:54.851 [2024-12-09 16:00:49.946046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.946078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.946272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.946304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.946429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.946462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.946643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.851 [2024-12-09 16:00:49.946674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.851 qpair failed and we were unable to recover it. 00:27:54.851 [2024-12-09 16:00:49.946924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.946954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 
00:27:54.852 [2024-12-09 16:00:49.947085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.947115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 00:27:54.852 [2024-12-09 16:00:49.947417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.947451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 00:27:54.852 [2024-12-09 16:00:49.947700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.947732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 00:27:54.852 [2024-12-09 16:00:49.947915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.947946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 00:27:54.852 [2024-12-09 16:00:49.948062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.852 [2024-12-09 16:00:49.948093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.852 qpair failed and we were unable to recover it. 
00:27:54.852 [2024-12-09 16:00:49.948353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.852 [2024-12-09 16:00:49.948387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.852 qpair failed and we were unable to recover it.
00:27:54.852 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7fdedc000b90 (addr=10.0.0.2, port=4420) repeats through 2024-12-09 16:00:49.972711; repeated records omitted ...]
00:27:54.855 [2024-12-09 16:00:49.972902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.972933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.973099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.973132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.973263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.973296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.973428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.973458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.973582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.973744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.973773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.974034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.974064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.974259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.974290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.974552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.974582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.974766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.974796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.974987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.975190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.975357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.975597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.975745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.975958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.975988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.976180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.976211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.976423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.976454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.976709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.976741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.976999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.977031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.977229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.977262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.977469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.977500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.977669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.977700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.977968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.977998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.978194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.978233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.978373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.978404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.978691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.978724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.978984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.979015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.979155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.979186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 00:27:54.855 [2024-12-09 16:00:49.979383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.979415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.855 qpair failed and we were unable to recover it. 
00:27:54.855 [2024-12-09 16:00:49.979689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.855 [2024-12-09 16:00:49.979720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.979841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.979872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.980082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.980112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.980299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.980331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.980520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.980551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.980741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.980771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.980950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.980982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.981169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.981200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.981389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.981421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.981531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.981561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.981759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.981791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.981911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.981942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.982209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.982247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.982513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.982545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.982733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.982765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.982877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.982909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.983036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.983067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.983308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.983342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.983576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.983606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.983865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.983895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.984077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.984114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.984293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.984325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.984528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.984560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.984685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.984715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.984897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.984928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.985136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.985166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.985433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.985466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.985662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.985692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.985873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.985903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.986023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.986054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.986248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.986280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.986400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.986428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.986625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.986655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.986900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.986930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.987179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.987212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.987363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.987396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.987514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.987544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.987714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.987743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.987947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.987978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.988164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.988195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 
00:27:54.856 [2024-12-09 16:00:49.988336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.988368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.988550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.988580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.856 [2024-12-09 16:00:49.988774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.856 [2024-12-09 16:00:49.988806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.856 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.988939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.988968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.989077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.989108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 
00:27:54.857 [2024-12-09 16:00:49.989278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.989309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.989571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.989602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.989742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.989773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.989945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.989977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.990171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.990203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 
00:27:54.857 [2024-12-09 16:00:49.990429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.990460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.990589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.990619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.990793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.990824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.991022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.991053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 00:27:54.857 [2024-12-09 16:00:49.991191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.857 [2024-12-09 16:00:49.991230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.857 qpair failed and we were unable to recover it. 
00:27:54.857 [2024-12-09 16:00:49.991421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.991452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.991634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.991665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.991905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.991936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.992868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.992901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.993028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.993058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.993243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.993275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.993466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.993496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.993606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.993635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.993887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.993919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.994198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.994236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.994368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.994401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.994666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.994696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.994866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.994897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.995134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.995165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.995281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.995314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.995433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.995463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.995650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.995682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.995805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.995834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.996061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.996303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.996443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.996677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.996890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.996999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.997029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.857 qpair failed and we were unable to recover it.
00:27:54.857 [2024-12-09 16:00:49.997276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.857 [2024-12-09 16:00:49.997309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.997546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.997577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.997762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.997793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.997984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.998014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.998189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.998226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.998362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.998392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.998632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.998662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.998896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.998927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.999098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.999128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.999240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.999274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.999474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.999506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:49.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:49.999773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.000012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.000044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.000182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.000212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.000419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.000449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.000674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.000705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.000887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.000924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.001902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.001934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.002128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.002160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.002358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.002391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.002519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.002552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.002723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.002755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.002994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.003026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.003208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.003251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.003442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.003475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.003704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.003738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.003874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.003907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.004939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.004970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.005143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.005177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.005318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.005350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.005534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.005565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.005674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.005707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.858 [2024-12-09 16:00:50.005811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.858 [2024-12-09 16:00:50.005843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.858 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.005982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.006198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.006372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.006513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.006665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.006867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.006898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.007910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.007943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.008113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.008334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.008473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.008699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.008855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.008980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.009011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.009208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.009259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.009372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.009410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.009610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.009650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.009710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf00460 (9): Bad file descriptor
00:27:54.859 [2024-12-09 16:00:50.009979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.010061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.010307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.859 [2024-12-09 16:00:50.010360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:54.859 qpair failed and we were unable to recover it.
00:27:54.859 [2024-12-09 16:00:50.010519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.859 [2024-12-09 16:00:50.010563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.010792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.010852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.011038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.011073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.011266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.011327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.011488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.011535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 
00:27:54.860 [2024-12-09 16:00:50.011719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.011765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.011954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.012000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.012178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.012236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.012449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.012494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.012666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.012723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 
00:27:54.860 [2024-12-09 16:00:50.012932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.013000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.013241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.013288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.013463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.013511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.013695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.013740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.013960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 
00:27:54.860 [2024-12-09 16:00:50.014166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.014415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.014584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.014744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.014911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.014942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 
00:27:54.860 [2024-12-09 16:00:50.015064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.015226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.015387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.015530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.015695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 
00:27:54.860 [2024-12-09 16:00:50.015837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.015867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.016004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.016036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.016208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.016247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.016431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.016464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:54.860 qpair failed and we were unable to recover it. 00:27:54.860 [2024-12-09 16:00:50.016667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.860 [2024-12-09 16:00:50.016698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.016823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.016856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.016977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.017116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.017329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.017474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.017617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.017917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.017949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.018067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.018100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.018227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.018260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.018446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.018479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.018668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.018700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.018894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.018926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.019048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.019080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.019276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.019316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.019501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.019533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.019773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.019805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.019925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.019957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.020237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.020271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.020398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.020429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.020671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.020703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.020881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.020914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.021051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.021239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.021406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.021611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 
00:27:55.140 [2024-12-09 16:00:50.021756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.021911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.140 [2024-12-09 16:00:50.021943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.140 qpair failed and we were unable to recover it. 00:27:55.140 [2024-12-09 16:00:50.022213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.022254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.022429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.022463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.022656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.022687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.022867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.022898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.023012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.023044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.023214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.023255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.023366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.023399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.023608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.023641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.023899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.023929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.024129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.024162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.024387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.024421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.024610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.024642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.024825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.024857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.025141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.025173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.025376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.025409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.025654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.025687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.025872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.025904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.026091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.026122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.026380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.026413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.026609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.026641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.026890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.026922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.027149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.027181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.027374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.027406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.027602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.027634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.027869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.027901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.028025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.028057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.028320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.028363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.028491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.028523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.028714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.028747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.028982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.029146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.029388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.029529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.029777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.029936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.029967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.030159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.030191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.030450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.030483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.141 [2024-12-09 16:00:50.030657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.030688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 
00:27:55.141 [2024-12-09 16:00:50.030866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.141 [2024-12-09 16:00:50.030897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.141 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.031071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.031102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.031292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.031326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.031495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.031526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.031722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.031754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.032073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.032106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.032296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.032330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.032456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.032487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.032673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.032705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.032900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.032931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.033049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.033080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.033244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.033278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.033531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.033564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.033760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.033792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.033982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.034125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.034332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.034483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.034624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.034784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.034816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.035061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.035240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.035385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.035612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.035757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.035938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.035970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.036117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.036149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.036479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.036521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.036734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.036814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.036973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.037125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.037342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.037496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.037650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.037800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.142 [2024-12-09 16:00:50.037964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.037997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.038123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.038154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.038326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.038359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.038483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.038515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 00:27:55.142 [2024-12-09 16:00:50.038644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.142 [2024-12-09 16:00:50.038676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.142 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.038791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.038822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.039012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.039044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.039338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.039373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.039501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.039535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.039657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.039690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.039795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.039829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.040003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.040039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.040243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.040279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.040491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.040524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.040705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.040738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.040858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.040889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.041077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.041109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.041227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.041260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2157863 Killed "${NVMF_APP[@]}" "$@" 00:27:55.143 [2024-12-09 16:00:50.041383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.041414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.041603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.041634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.041905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.041937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:55.143 [2024-12-09 16:00:50.042051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.042083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.042263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.042296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.042413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:55.143 [2024-12-09 16:00:50.042445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.042617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.042647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:55.143 [2024-12-09 16:00:50.042846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.042879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.042974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.043005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.143 [2024-12-09 16:00:50.043180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.043212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.043355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.143 [2024-12-09 16:00:50.043387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.043507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.043539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.043659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.043689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.043931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.043969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.044143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.044174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.044355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.044388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.044571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.044602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.044736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.044768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.044903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.044934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.045134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.045165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 
00:27:55.143 [2024-12-09 16:00:50.045353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.045384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.045489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.045520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.045654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.045685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.143 [2024-12-09 16:00:50.045815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.143 [2024-12-09 16:00:50.045846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.143 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.045960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.045992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 
00:27:55.144 [2024-12-09 16:00:50.046186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.046229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.046348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.046379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.046500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.046531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.046645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.046677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.046787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.046818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 
00:27:55.144 [2024-12-09 16:00:50.046991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.047198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.047412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.047623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.047759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 
00:27:55.144 [2024-12-09 16:00:50.047908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.047939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.048192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.048239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.048438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.048469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.048648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 00:27:55.144 [2024-12-09 16:00:50.048867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.144 [2024-12-09 16:00:50.048898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.144 qpair failed and we were unable to recover it. 
00:27:55.144 [2024-12-09 16:00:50.049012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.049044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.049166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.049198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.049459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.049492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.049729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.049760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.049946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.049977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2158578
00:27:55.144 [2024-12-09 16:00:50.050172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.050204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.050337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.050370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2158578
00:27:55.144 [2024-12-09 16:00:50.050541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.050572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:55.144 [2024-12-09 16:00:50.050749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2158578 ']'
00:27:55.144 [2024-12-09 16:00:50.050962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.050994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.051098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.051127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:55.144 [2024-12-09 16:00:50.051301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.051335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.051475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.051507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:55.144 [2024-12-09 16:00:50.051609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.144 [2024-12-09 16:00:50.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.144 qpair failed and we were unable to recover it.
00:27:55.144 [2024-12-09 16:00:50.051807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.051841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:55.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.052031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.052063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:55.145 [2024-12-09 16:00:50.052268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.052301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.052419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.145 [2024-12-09 16:00:50.052451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.052576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.052608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.052783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.052814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.052937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.052968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.053137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.053169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.053372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.053405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.053668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.053700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.053835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.053870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.054017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.054054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.054243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.054278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.054485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.054517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.054702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.054733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.054907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.054939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.055067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.055102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.055233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.055269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.055402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.055433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.055626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.055658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.055896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.055927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.056052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.056269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.056303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.056414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.056446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.056720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.056751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.056855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.056886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.057024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.057055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.057266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.057300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.057525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.057556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.057682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.057714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.057882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.057914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.058040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.058071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.058264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.058297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.058490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.058521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.058698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.058730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.058990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.059021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.059208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.059254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.145 [2024-12-09 16:00:50.059382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.145 [2024-12-09 16:00:50.059412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.145 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.059518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.059549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.059754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.059786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.059912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.059943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.060146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.060177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.060360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.060393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.060588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.060619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.060798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.060829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.061962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.061995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.062198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.062241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.062419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.062451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.062663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.062695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.062818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.062848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.063876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.063907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.064095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.064127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.064248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.064282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.064461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.064498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.064602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.064634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.064914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.064946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.065112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.065326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.065463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.065671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.065885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.065998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.066030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.066236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.066269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.066385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.066417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.066643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.066674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.066800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.066831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.146 qpair failed and we were unable to recover it.
00:27:55.146 [2024-12-09 16:00:50.067063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.146 [2024-12-09 16:00:50.067095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.067215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.067258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.067543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.067575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.067757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.067788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.067911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.067944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.068127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.068158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.068351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.068384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.068508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.068540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.068729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.068760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.068929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.068961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.069077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.069108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.069280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.069389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.069419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.069682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.069714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.069843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.069874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.070894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.070925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.071137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.071292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.071453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.071593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.071881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.071998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.072029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.072211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.072252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.072438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.072477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.072792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.072825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.073061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.073092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.073330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.073364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.073477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.073508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.073622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.073654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.073820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.073852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.074058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.074091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.074338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.074371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.074491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.074522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.074649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.074680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.074800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.147 [2024-12-09 16:00:50.074831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.147 qpair failed and we were unable to recover it.
00:27:55.147 [2024-12-09 16:00:50.075009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.075167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.075403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.075537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.075703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.075865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.075896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.076074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.076268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.076475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.076659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.076853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.076970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.077166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.077383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.077566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.077792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.077952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.077983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.078922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.078953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.079123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.079155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.079259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.079291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.079477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.079508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.079616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.079647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.079831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.079862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.080044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.080076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.080309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.080380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.080593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.080630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.080814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.080846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.081039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.081072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.081268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.081303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.081438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.081471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.081677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.081710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.081853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.081886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.082111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.082142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.082254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.148 [2024-12-09 16:00:50.082287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.148 qpair failed and we were unable to recover it.
00:27:55.148 [2024-12-09 16:00:50.082565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.082597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.082712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.082744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.082929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.082961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.083079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.083121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.083345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.083381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.083556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.083589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.083717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.083749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.083975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.084007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.084134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.084184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.084444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.084511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.084714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.084749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.085019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.085051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.085245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.085279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.085461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.085493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.085611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.085643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.085829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.085861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.086042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.086078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.086278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.149 [2024-12-09 16:00:50.086311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.149 qpair failed and we were unable to recover it.
00:27:55.149 [2024-12-09 16:00:50.086476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.086507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.086694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.086726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.086842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.086872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.086982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.087181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 
00:27:55.149 [2024-12-09 16:00:50.087419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.087627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.087778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.087934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.087966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.088088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.088119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 
00:27:55.149 [2024-12-09 16:00:50.088230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.088262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.088386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.088417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.088702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.088737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.088850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.088882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.089026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.089058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 
00:27:55.149 [2024-12-09 16:00:50.089203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.089249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.089359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.089391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.149 qpair failed and we were unable to recover it. 00:27:55.149 [2024-12-09 16:00:50.089563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.149 [2024-12-09 16:00:50.089596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.089772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.089804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.089922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.089954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.090123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.090155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.090270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.090303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.090417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.090449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.090553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.090585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.090846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.090877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.090987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.091025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.091203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.091247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.091419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.091451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.091636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.091668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.091786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.091818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.092076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.092107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.092240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.092273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.092466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.092497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.092603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.092634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.092805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.092836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.093111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.093141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.093327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.093359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.093532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.093563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.093695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.093727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.093861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.093892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.094121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.094152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.094288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.094321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.094497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.094528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.094649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.094679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.094792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.094824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.095091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.095123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.095282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.095314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.095584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.095615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.095712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.095743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.095925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.095956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.096129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.096160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.096384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.096417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.096634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.096702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.096906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.096942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 00:27:55.150 [2024-12-09 16:00:50.097123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.150 [2024-12-09 16:00:50.097156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.150 qpair failed and we were unable to recover it. 
00:27:55.150 [2024-12-09 16:00:50.097269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.097307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.097498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.097531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.097684] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:27:55.151 [2024-12-09 16:00:50.097717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.097740] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.151 [2024-12-09 16:00:50.097756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.097994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.098264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.098423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.098571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.098713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.098921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.098951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.099052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.099091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.099297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.099329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.099452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.099482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.099704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.099736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.099857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.099890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.100024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.100058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.100318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.100352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.100591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.100625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.100884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.100917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.101103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.101136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.101316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.101351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.101484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.101517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.101629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.101660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.101919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.101953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.102085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.102300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.102447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.102601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.102750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 00:27:55.151 [2024-12-09 16:00:50.102892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.151 [2024-12-09 16:00:50.102924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.151 qpair failed and we were unable to recover it. 
00:27:55.151 [2024-12-09 16:00:50.103031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.151 [2024-12-09 16:00:50.103064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.151 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from 16:00:50.103 through 16:00:50.130 ...]
00:27:55.154 [2024-12-09 16:00:50.130344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.130376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 00:27:55.154 [2024-12-09 16:00:50.130565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.130597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 00:27:55.154 [2024-12-09 16:00:50.130769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.130801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 00:27:55.154 [2024-12-09 16:00:50.130969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.131001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 00:27:55.154 [2024-12-09 16:00:50.131116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.131147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 
00:27:55.154 [2024-12-09 16:00:50.131356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.154 [2024-12-09 16:00:50.131389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.154 qpair failed and we were unable to recover it. 00:27:55.154 [2024-12-09 16:00:50.131596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.131629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.131836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.131870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.131981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.132124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.132349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.132501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.132643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.132918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.132951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.133076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.133107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.133349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.133383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.133501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.133533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.133716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.133748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.133921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.133952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.134070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.134103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.134252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.134478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.134512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.134716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.134747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.134958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.134990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.135179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.135213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.135421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.135649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.135682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.135938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.135970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.136104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.136136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.136333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.136367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.136497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.136533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.136720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.136753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.136961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.136994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.137184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.137228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.137351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.137383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.137519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.137552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.137747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.137778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.137896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.137929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.138046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.138077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.138201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.138259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.155 [2024-12-09 16:00:50.138450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.138484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.138724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.138758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.138995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.139027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.139226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.139262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 00:27:55.155 [2024-12-09 16:00:50.139559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.155 [2024-12-09 16:00:50.139592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.155 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.139711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.139744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.139935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.139967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.140159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.140193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.140305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.140337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.140524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.140558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.140782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.140815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.140989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.141023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.141288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.141343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.141546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.141581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.141777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.141810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.141990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.142022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.142135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.142168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.142373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.142407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.142646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.142678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.142866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.142899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.143137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.143169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.143301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.143334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.143510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.143543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.143664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.143696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.143885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.143917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.144041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.144072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.144209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.144255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.144526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.144746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.144778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.144958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.144990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.145126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.145157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.145289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.145323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.145444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.145476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.145658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.145691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 00:27:55.156 [2024-12-09 16:00:50.145873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.156 [2024-12-09 16:00:50.145906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.156 qpair failed and we were unable to recover it. 
00:27:55.156 [2024-12-09 16:00:50.146010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.146043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.146153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.146184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.146348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.146417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.146580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.146649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.146795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.146836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.147017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.147049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.147239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.147272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.147537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.147570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.147684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.147717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.156 qpair failed and we were unable to recover it.
00:27:55.156 [2024-12-09 16:00:50.147886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.156 [2024-12-09 16:00:50.147918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.148103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.148136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.148313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.148346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.148472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.148503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.148677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.148709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.148882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.148915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.149034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.149066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.149201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.149257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.149382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.149414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.149591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.149623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.149894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.149926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.150106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.150138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.150326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.150360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.150486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.150519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.150716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.150748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.150923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.150955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.151155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.151186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.151369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.151403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.151602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.151633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.151760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.151792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.151991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.152024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.152198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.152241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.152451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.152486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.152617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.152649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.152774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.152806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.153100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.153132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.153322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.153355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.153526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.153557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.153736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.153768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.153893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.153925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.154098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.154130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.154250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.154284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.154400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.154432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.154607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.154639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.154768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.154800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.155040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.155077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.155200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.155242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.155450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.155483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.155656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.155689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.157 qpair failed and we were unable to recover it.
00:27:55.157 [2024-12-09 16:00:50.155895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.157 [2024-12-09 16:00:50.155927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.156127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.156161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.156421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.156454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.156672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.156704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.156895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.156927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.157110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.157142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.157270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.157305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.157551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.157583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.157708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.157740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.157863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.157895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.158913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.158945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.159189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.159234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.159422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.159454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.159644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.159677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.159874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.159905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.160165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.160197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.160402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.160434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.160680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.160713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.160927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.160958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.161070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.161103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.161287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.161322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.161446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.161478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.161596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.161628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.161797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.161831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.162040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.162073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.162188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.162228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.162422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.162455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.162644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.162678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.162863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.162895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.163017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.158 [2024-12-09 16:00:50.163051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.158 qpair failed and we were unable to recover it.
00:27:55.158 [2024-12-09 16:00:50.163312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.163346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.163585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.163622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.163883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.163916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.164107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.164140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.164266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.164299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.164490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.164521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.164639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.164672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.164860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.164891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.165160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.165192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.165482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.165515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.165691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.165723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.165892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.165924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.166128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.166290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.166323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.166507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.166539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.166829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.166861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.167051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.167084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.167291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.167326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.167556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.167590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.167770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.167802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.167993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.168025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.168139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.168172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.168288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.168321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.168558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.168591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.168769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.168802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.168982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.169013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.169196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.169239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.169450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.169483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.169663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.169697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.169936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.169970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.170164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.170196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.170314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.159 [2024-12-09 16:00:50.170351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.159 qpair failed and we were unable to recover it.
00:27:55.159 [2024-12-09 16:00:50.170593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.170625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.170751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.170784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.170958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.170993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.171250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.171287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.171417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.171449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 
00:27:55.159 [2024-12-09 16:00:50.171641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.171674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.171870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.159 [2024-12-09 16:00:50.171903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.159 qpair failed and we were unable to recover it. 00:27:55.159 [2024-12-09 16:00:50.172032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.172064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.172175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.172208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.172429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.172467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 
00:27:55.160 [2024-12-09 16:00:50.172598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.172631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.172830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.172862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.173099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.173133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.173402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.173437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.173611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.173644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 
00:27:55.160 [2024-12-09 16:00:50.173821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.173855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.174040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.174072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.174254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.174289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.174555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.174589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.174779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.174811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 
00:27:55.160 [2024-12-09 16:00:50.174991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.175144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.175370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.175595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.175798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 
00:27:55.160 [2024-12-09 16:00:50.175958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.175992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.176162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.176194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.176417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.176454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.176565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.176598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 00:27:55.160 [2024-12-09 16:00:50.176731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.160 [2024-12-09 16:00:50.176766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.160 qpair failed and we were unable to recover it. 
00:27:55.160 [2024-12-09 16:00:50.177418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.160 [2024-12-09 16:00:50.177490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.160 qpair failed and we were unable to recover it.
00:27:55.160 [2024-12-09 16:00:50.177951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:55.162 [2024-12-09 16:00:50.191104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.191136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.191327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.191362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.191577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.191610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.191721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.191755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.191889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.191921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 
00:27:55.162 [2024-12-09 16:00:50.192058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.192091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.192327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.192362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.192542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.192582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.192756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.192789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.192903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.192936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 
00:27:55.162 [2024-12-09 16:00:50.193109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.193143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.193257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.193294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.193469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.193501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.193618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.193650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.193771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.193805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 
00:27:55.162 [2024-12-09 16:00:50.193986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.194019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.194208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.194251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.194518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.194551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.194744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.194776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.194958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.194992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 
00:27:55.162 [2024-12-09 16:00:50.195188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.195231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.195368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.195401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.195527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.195559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.195746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.195780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 00:27:55.162 [2024-12-09 16:00:50.195972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.162 [2024-12-09 16:00:50.196004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.162 qpair failed and we were unable to recover it. 
00:27:55.162 [2024-12-09 16:00:50.196197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.196240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.196360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.196394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.196581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.196615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.196750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.196784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.196923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.196955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.197135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.197169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.197433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.197467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.197579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.197611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.197868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.197900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.198136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.198188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.198348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.198395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.198593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.198627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.198802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.198835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.199015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.199048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.199272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.199308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.199427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.199462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.199637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.199671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.199912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.199945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.200235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.200269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.200388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.200422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.200541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.200574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.200756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.200790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.200918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.200952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.201172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.201206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.201324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.201355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.201473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.201507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.201689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.201722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.201894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.201928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.202121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.202154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.202342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.202376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.202550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.202593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.202699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.202738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.202850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.202883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.203006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.203039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.203158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.203190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 
00:27:55.163 [2024-12-09 16:00:50.203335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.203369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.203485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.203530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.203773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.203805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.203986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.163 [2024-12-09 16:00:50.204018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.163 qpair failed and we were unable to recover it. 00:27:55.163 [2024-12-09 16:00:50.204277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.204314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 
00:27:55.164 [2024-12-09 16:00:50.204492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.204526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.204649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.204681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.204881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.204913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.205092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.205124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.205413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.205447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 
00:27:55.164 [2024-12-09 16:00:50.205626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.205659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.205825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.205859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.205984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.206018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.206237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.206272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 00:27:55.164 [2024-12-09 16:00:50.206395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.164 [2024-12-09 16:00:50.206429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.164 qpair failed and we were unable to recover it. 
00:27:55.164 [2024-12-09 16:00:50.206550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.164 [2024-12-09 16:00:50.206583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.164 qpair failed and we were unable to recover it.
00:27:55.164 [2024-12-09 16:00:50.211869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.164 [2024-12-09 16:00:50.211908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.164 qpair failed and we were unable to recover it.
00:27:55.165 [2024-12-09 16:00:50.218340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:55.165 [2024-12-09 16:00:50.218366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:55.165 [2024-12-09 16:00:50.218373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:55.165 [2024-12-09 16:00:50.218379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:55.165 [2024-12-09 16:00:50.218384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:55.165 [2024-12-09 16:00:50.219888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:55.165 [2024-12-09 16:00:50.219994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:55.165 [2024-12-09 16:00:50.220111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.166 [2024-12-09 16:00:50.220121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:55.166 [2024-12-09 16:00:50.220156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.166 qpair failed and we were unable to recover it.
00:27:55.166 [2024-12-09 16:00:50.220122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:55.167 [2024-12-09 16:00:50.230836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.230868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.231053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.231087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.231209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.231253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.231437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.231470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.231711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.231743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 
00:27:55.167 [2024-12-09 16:00:50.231880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.231918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.232111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.232145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.232347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.232381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.232621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.232652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.232893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.232926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 
00:27:55.167 [2024-12-09 16:00:50.233227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.233262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.233446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.233480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.233665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.233700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.233897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.233932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.234133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.234168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 
00:27:55.167 [2024-12-09 16:00:50.234388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.234423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.234544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.234576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.234790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.234824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.234965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.234998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.235190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.235232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 
00:27:55.167 [2024-12-09 16:00:50.235412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.235446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.235690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.235725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.235852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.235885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.236124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.236158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.236276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.236309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 
00:27:55.167 [2024-12-09 16:00:50.236553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.167 [2024-12-09 16:00:50.236588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.167 qpair failed and we were unable to recover it. 00:27:55.167 [2024-12-09 16:00:50.236773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.236807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.236999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.237214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.237384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.237617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.237768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.237913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.237946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.238186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.238239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.238378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.238409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.238522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.238557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.238732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.238766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.238979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.239012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.239133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.239165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.239384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.239418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.239613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.239646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.239778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.239811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.239993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.240027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.240211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.240255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.240469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.240502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.240743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.240784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.241006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.241039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.241241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.241277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.241412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.241444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.241686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.241719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.241849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.241883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.241988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.242021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.242301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.242335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.242605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.242637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.242824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.242857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.243115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.243148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.243268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.243301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.243423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.243456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.243712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.243745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.243879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.243913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.244091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.244124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.244334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.244368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.244565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.244598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.244794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.244829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 00:27:55.168 [2024-12-09 16:00:50.245045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.168 [2024-12-09 16:00:50.245078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.168 qpair failed and we were unable to recover it. 
00:27:55.168 [2024-12-09 16:00:50.245261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.245298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.245496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.245528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.245719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.245752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.246005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.246039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.246211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.246261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 
00:27:55.169 [2024-12-09 16:00:50.246526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.246559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.246822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.246856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.247131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.247166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.247412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.247447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 00:27:55.169 [2024-12-09 16:00:50.247734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.169 [2024-12-09 16:00:50.247768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420 00:27:55.169 qpair failed and we were unable to recover it. 
00:27:55.169 [2024-12-09 16:00:50.248019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.169 [2024-12-09 16:00:50.248053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.169 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111, tqpair=0x7fdee4000b90, addr=10.0.0.2, port=4420) repeated from 16:00:50.248264 through 16:00:50.265634 ...]
00:27:55.171 [2024-12-09 16:00:50.265863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.171 [2024-12-09 16:00:50.265924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.171 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111, tqpair=0xef2500, addr=10.0.0.2, port=4420) repeated from 16:00:50.266053 through 16:00:50.274305 ...]
00:27:55.172 [2024-12-09 16:00:50.274490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.274524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.274722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.274755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.274858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.274889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.275157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.275192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.275460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.275492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 
00:27:55.172 [2024-12-09 16:00:50.275668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.275702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.275968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.276001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.276265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.276300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.276421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.276454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.276717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.276751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 
00:27:55.172 [2024-12-09 16:00:50.276934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.276967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.277205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.277250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.277471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.277505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.277669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.277704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.277944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.277977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 
00:27:55.172 [2024-12-09 16:00:50.278183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.278233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.278493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.278533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.278808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.278840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.279013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.279046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.279251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.279290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 
00:27:55.172 [2024-12-09 16:00:50.279413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.279446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.279688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.279721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.279983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.280017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.280197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.280239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.280405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.280463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 
00:27:55.172 [2024-12-09 16:00:50.280655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.280689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.280874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.280908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.172 qpair failed and we were unable to recover it. 00:27:55.172 [2024-12-09 16:00:50.281010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.172 [2024-12-09 16:00:50.281042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.281161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.281195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.281366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.281399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.281588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.281620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.281771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.281804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.281909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.281942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.282195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.282242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.282404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.282437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.282721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.282756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.282927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.282959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.283202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.283247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.283487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.283521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.283698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.283731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.284011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.284044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.284234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.284269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.284458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.284490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.284729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.284769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.284952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.284984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.285245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.285279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.285413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.285445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.285662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.285695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.285886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.285917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.286102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.286135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.286372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.286407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.286594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.286626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.286815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.286848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.287086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.287118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.287291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.287325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.287505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.287538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.287803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.287835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.288081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.288115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.288362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.288396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.288587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.288619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.288881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.288914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.289253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.289492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.289524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.289735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.289767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.289948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.289981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 
00:27:55.173 [2024-12-09 16:00:50.290121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.290154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.173 qpair failed and we were unable to recover it. 00:27:55.173 [2024-12-09 16:00:50.290354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.173 [2024-12-09 16:00:50.290388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.290649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.290683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.290923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.290956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.291164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.291197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 
00:27:55.174 [2024-12-09 16:00:50.291430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.291464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.291734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.291768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.292070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.292102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.292276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.292311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 00:27:55.174 [2024-12-09 16:00:50.292530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.292563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 
00:27:55.174 [2024-12-09 16:00:50.292804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.174 [2024-12-09 16:00:50.292838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.174 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.310530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.310577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.310877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.310911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.311099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.311133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.311319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.311638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.311670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.311791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.311824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.312017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.312051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.312336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.312371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.312508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.312542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.312783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.312817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.312992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.313025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.313315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.313351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.313484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.313517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.313774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.313807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.314082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.314116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.314348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.314383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.314577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.314609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.314859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.314891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.315148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.315181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.315326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.315360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.315505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.315539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.315712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.315743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.315935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.315969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.316102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.316135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.316330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.316365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.316481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.316514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.316697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.316730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.316915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.316955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.317227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.317263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 00:27:55.176 [2024-12-09 16:00:50.317474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.176 [2024-12-09 16:00:50.317507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.176 qpair failed and we were unable to recover it. 
00:27:55.176 [2024-12-09 16:00:50.317677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.317710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.317916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.317948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.318212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.318261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.318530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.318563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.318701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.318735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.318992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.319025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.319310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.319345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.319535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.319569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.319748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.319781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.319988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.320021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.320211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.320255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.320444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.320478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.320738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.320771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.321007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.321039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.321248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.321282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.321474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.321507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.321754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.321787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.322048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.322082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.322358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.322393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.322670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.322703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.322915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.322948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.323072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.323106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.323299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.323333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.323471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.323504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:55.177 [2024-12-09 16:00:50.323785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.323818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:55.177 [2024-12-09 16:00:50.324090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.324357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.324392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt [2024-12-09 16:00:50.324570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.324604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.324729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable [2024-12-09 16:00:50.324761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.325013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-09 16:00:50.325047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.325304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.325338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.325519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.325552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.325747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.325780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.325995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.326028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.326198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.326246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.326415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.326448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.326579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.326611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.177 [2024-12-09 16:00:50.326739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.177 [2024-12-09 16:00:50.326772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.177 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.327083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.327117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.327289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.327323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.327431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.327465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.327707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.327740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.327959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.327992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.328259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.328293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.328475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.328511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.328770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.328803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.328926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.328961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.329102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.329136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.329278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.329313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.329417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.329450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.329655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.329688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.329805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.329838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.330119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.330152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.330331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.330364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.330600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.330633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.330819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.330853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.331045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.331078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.331296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.331329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.331521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.331554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.331740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.331772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.332048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.332081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.332279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.332313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.332552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.332585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.332787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.332824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.333040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.333073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.333299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.333414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.333446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.333683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.333716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.333958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.333991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.334186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.334229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.334487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.334521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.334700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.334735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.334929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.334963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.335235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.335269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.335467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.335499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.335674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.335708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.335885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-12-09 16:00:50.335925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 [2024-12-09 16:00:50.336177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.336211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.336458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.336492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.336682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.336718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.336991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.337023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.337229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.337266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.337456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.337489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.337682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.337716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.337862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.337895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.338104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.338137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.338329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.338364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.338561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.338771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.338803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.339078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.339113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.339309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.339342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.339525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.339559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.339735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.339768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.340034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-12-09 16:00:50.340068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 [2024-12-09 16:00:50.340263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.340297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.340478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.340511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.340754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.340787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.340992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.341026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.341207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.341249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 
00:27:55.179 [2024-12-09 16:00:50.341531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.341564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.341747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.341780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.341969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.342003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.342215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.342259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.342396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.342430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 
00:27:55.179 [2024-12-09 16:00:50.342703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.342738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.343003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.343037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.343333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.343368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.343485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.343517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.343708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.343744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 
00:27:55.179 [2024-12-09 16:00:50.343985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.344018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.344285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.344319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.344442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.344475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.344666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.344700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.344986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.345020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 
00:27:55.179 [2024-12-09 16:00:50.345283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.345318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.345469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.345504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.179 [2024-12-09 16:00:50.345643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.179 [2024-12-09 16:00:50.345677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.179 qpair failed and we were unable to recover it. 00:27:55.180 [2024-12-09 16:00:50.345971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.180 [2024-12-09 16:00:50.346010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.180 qpair failed and we were unable to recover it. 00:27:55.180 [2024-12-09 16:00:50.346147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.180 [2024-12-09 16:00:50.346182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.180 qpair failed and we were unable to recover it. 
00:27:55.180 [2024-12-09 16:00:50.346442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.180 [2024-12-09 16:00:50.346476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.180 qpair failed and we were unable to recover it. 00:27:55.180 [2024-12-09 16:00:50.346691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.346724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.347060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.347094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.347281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.347315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.347504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.347539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-12-09 16:00:50.347720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.347754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.348038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.348071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.348249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.348284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.348409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.348441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.348626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.348659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-12-09 16:00:50.348795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.348830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.349021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.349055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.349202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.349245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.349433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.349466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.349714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-12-09 16:00:50.349936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.349970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.350096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.350130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.350268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.350304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.350493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.350525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-12-09 16:00:50.350789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.350822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-12-09 16:00:50.351082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.444 [2024-12-09 16:00:50.351115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.351302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.351337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.351545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.351578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.351703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.351736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.351986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.352019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.352211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.352265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.352462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.352496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.352624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.352656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.352877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.352911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.353228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.353263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.353394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.353428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.353643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.353676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.353958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.353991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.354260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.354295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.354443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.354477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.354584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.354617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.354740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.354773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.355077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.355110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.355315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.355349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.355497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.355548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.355667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.355701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.356004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.356037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.356291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.356326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.356502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.356537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.356719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.356753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.357009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.357043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.357243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.357276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.357558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.357590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.357764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.357796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.358087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.358120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.358339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.358372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.358512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.358545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.358755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.358796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.359089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.359123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.359433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.359467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 
00:27:55.445 [2024-12-09 16:00:50.359604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.359636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.359756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.359788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.359912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.359946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.445 qpair failed and we were unable to recover it. 00:27:55.445 [2024-12-09 16:00:50.360070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.445 [2024-12-09 16:00:50.360103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.446 [2024-12-09 16:00:50.360308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.360345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 
00:27:55.446 [2024-12-09 16:00:50.360487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.360520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.360642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:55.446 [2024-12-09 16:00:50.360676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.360887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.360919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.446 [2024-12-09 16:00:50.361194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.361239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 
00:27:55.446 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.446 [2024-12-09 16:00:50.361489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.361524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.361784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.361816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.362002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.362034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.362156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.362190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 00:27:55.446 [2024-12-09 16:00:50.362389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.446 [2024-12-09 16:00:50.362420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420 00:27:55.446 qpair failed and we were unable to recover it. 
00:27:55.446 [2024-12-09 16:00:50.362611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.362643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.362834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.362868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.363047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.363080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.363299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.363333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.363514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.363548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.363761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.363794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.363975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.364008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.364199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.364243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fded8000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.364453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.364501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdee4000b90 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.364640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.364676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.364984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.365017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.365300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.365334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.365468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.365501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.365685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.365718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.365987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.366019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.366208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.366249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.366443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.366474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.366731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.366764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.367044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.367076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.367257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.367291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.367533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.367567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.367751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.367784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.368036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.368069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.368253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.368289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.368475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.368508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.368743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.446 [2024-12-09 16:00:50.368777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.446 qpair failed and we were unable to recover it.
00:27:55.446 [2024-12-09 16:00:50.368911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.368944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.369128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.369162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.369419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.369453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.369692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.369725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.369997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.370028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.370325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.370359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.370653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.370685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.370970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.371250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.371415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.371588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.371749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.371963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.371997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.372249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.372282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.372458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.372491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.372729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.372761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.372963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.372995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.373237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.373270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.373391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.373424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.373681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.373715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.373931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.373964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.374202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.374244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.374507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.374541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.374730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.374764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.375051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.375085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.375267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.375302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.375491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.375523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.375815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.375849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.376113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.376146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.376408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.376443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.376572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.376605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.376818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.376851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.377088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.377120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.377355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.377389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.377654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.377687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.377823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.377856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.378118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.378152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.378349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.378384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.378575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.378608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.447 [2024-12-09 16:00:50.378868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.447 [2024-12-09 16:00:50.378902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.447 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.379180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.379213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.379492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.379526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.379783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.379815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.380079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.380112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.380329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.380364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.380586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.380619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.380903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.380936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.381176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.381210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.381419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.381452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.381716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.381749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.382020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.382059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.382289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.382325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.382611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.382645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.382917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.382950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.383225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.383260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.383407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.383441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.383633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.383666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.383856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.383890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.384101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.384135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.384336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.384371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.384542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.384576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.384814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.384849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.385104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.385138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.385335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.385376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.385639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.385672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.385854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.385888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.386126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.386159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.386366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.386401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.386639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.448 [2024-12-09 16:00:50.386671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.448 qpair failed and we were unable to recover it.
00:27:55.448 [2024-12-09 16:00:50.386972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.387006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 00:27:55.448 [2024-12-09 16:00:50.387136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.387168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 00:27:55.448 [2024-12-09 16:00:50.387321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.387354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 00:27:55.448 [2024-12-09 16:00:50.387536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.387568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 00:27:55.448 [2024-12-09 16:00:50.387754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.387787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 
00:27:55.448 [2024-12-09 16:00:50.388000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.448 [2024-12-09 16:00:50.388033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.448 qpair failed and we were unable to recover it. 00:27:55.448 [2024-12-09 16:00:50.388273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.449 [2024-12-09 16:00:50.388307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.449 qpair failed and we were unable to recover it. 00:27:55.449 [2024-12-09 16:00:50.388592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.449 [2024-12-09 16:00:50.388623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.449 qpair failed and we were unable to recover it. 00:27:55.449 [2024-12-09 16:00:50.388920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.449 [2024-12-09 16:00:50.388953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.449 qpair failed and we were unable to recover it. 00:27:55.449 Malloc0 00:27:55.449 [2024-12-09 16:00:50.389193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.449 [2024-12-09 16:00:50.389248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.449 qpair failed and we were unable to recover it. 
00:27:55.449 [2024-12-09 16:00:50.389390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.389423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.389660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.389693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.449 [2024-12-09 16:00:50.389955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.389987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.390233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.390267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:55.449 [2024-12-09 16:00:50.390460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.390494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.449 [2024-12-09 16:00:50.390676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.390711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.390886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.390918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.449 [2024-12-09 16:00:50.391158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.391191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.391487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.391520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.391744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.391782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.391953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.391986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.392158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.392190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.392489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.392522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.392693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.392725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.392862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.392895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.393133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.393165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.393439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.393473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.393741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.393773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.393948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.393981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.394191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.394234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.394470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.394503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.394673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.394706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.394911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.394943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.395214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.395258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.395443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.395477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.395735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.395768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.395949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.395981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.396155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.396188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.396473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.396516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.396698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.396712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:55.449 [2024-12-09 16:00:50.396731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.449 qpair failed and we were unable to recover it.
00:27:55.449 [2024-12-09 16:00:50.396970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.449 [2024-12-09 16:00:50.397002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.397262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.397297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.397472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.397504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.397744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.397777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.397904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.397937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.398215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.398262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.398469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.398503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.398635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.398668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.398965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.398998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.399169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.399202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.399347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.399381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.399590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.399623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.399794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.399827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.400012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.400046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.400216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.400257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.400453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.400486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.400749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.400782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.401070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.401104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.401370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.401406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.401536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.401570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.401755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.401789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.401996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.402029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.450 [2024-12-09 16:00:50.402292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.402327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.402564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:55.450 [2024-12-09 16:00:50.402597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.402867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.402900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.450 [2024-12-09 16:00:50.403187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.403229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.403429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.403463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.403649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.403682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.403810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.403843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.404014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.404047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.404226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.404261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.404503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.404542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.404778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.404811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.405053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.405086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.405200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.405244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.405509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.405542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.405844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.405877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.406135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.406169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.450 qpair failed and we were unable to recover it.
00:27:55.450 [2024-12-09 16:00:50.406322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.450 [2024-12-09 16:00:50.406358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.406536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.406569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.406757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.406789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.407025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.407059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.407255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.407289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.407527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.407559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.407822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.407854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.408102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.408136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.408305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.408338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.408625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.408658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.408901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.408934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.409118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.409152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.409390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.409424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.409625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.451 [2024-12-09 16:00:50.409658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420
00:27:55.451 qpair failed and we were unable to recover it.
00:27:55.451 [2024-12-09 16:00:50.409943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.409976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.451 [2024-12-09 16:00:50.410242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.410276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.410569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.451 [2024-12-09 16:00:50.410602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.410834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.410868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 
00:27:55.451 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.451 [2024-12-09 16:00:50.411055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.411088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.451 [2024-12-09 16:00:50.411272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.411306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.411441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.411475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.411671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.411704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.411943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.411975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 
00:27:55.451 [2024-12-09 16:00:50.412238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.412273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.412558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.412591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.412870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.412902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.413140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.413173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.413377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.413411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 
00:27:55.451 [2024-12-09 16:00:50.413515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.413546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.413748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.413782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.413975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.414008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.414182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.414215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef2500 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.414445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.414484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 
00:27:55.451 [2024-12-09 16:00:50.414669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.414702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.414882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.414914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.415177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.415210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.415401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.415434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 00:27:55.451 [2024-12-09 16:00:50.415657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.451 [2024-12-09 16:00:50.415689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.451 qpair failed and we were unable to recover it. 
00:27:55.451 [2024-12-09 16:00:50.415965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.415996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.416118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.416151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.416412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.416446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.416580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.416612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.416879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.416912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.417185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.417228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.417418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.417450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.417637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.417677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.417916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.417949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.452 [2024-12-09 16:00:50.418240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.418275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.418457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.418488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.452 [2024-12-09 16:00:50.418731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.418763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.452 [2024-12-09 16:00:50.419030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.419063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.452 [2024-12-09 16:00:50.419349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.419383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.419570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.419603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.419887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.419919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.420050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.420081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.420343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.420376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.420564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.420597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.420861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.420893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.421077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.421109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.421290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.421324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 00:27:55.452 [2024-12-09 16:00:50.421532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.452 [2024-12-09 16:00:50.421564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdedc000b90 with addr=10.0.0.2, port=4420 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.421657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.452 [2024-12-09 16:00:50.427387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.452 [2024-12-09 16:00:50.427494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.452 [2024-12-09 16:00:50.427539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.452 [2024-12-09 16:00:50.427562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.452 [2024-12-09 16:00:50.427582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.452 [2024-12-09 16:00:50.427633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.452 16:00:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2157886 00:27:55.452 [2024-12-09 16:00:50.437338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.452 [2024-12-09 16:00:50.437421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.452 [2024-12-09 16:00:50.437448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.452 [2024-12-09 16:00:50.437463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.452 [2024-12-09 16:00:50.437476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.452 [2024-12-09 16:00:50.437507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.447317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.452 [2024-12-09 16:00:50.447391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.452 [2024-12-09 16:00:50.447410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.452 [2024-12-09 16:00:50.447421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.452 [2024-12-09 16:00:50.447430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.452 [2024-12-09 16:00:50.447452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.452 [2024-12-09 16:00:50.457241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.452 [2024-12-09 16:00:50.457310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.452 [2024-12-09 16:00:50.457324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.452 [2024-12-09 16:00:50.457331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.452 [2024-12-09 16:00:50.457337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.452 [2024-12-09 16:00:50.457352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.452 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.467241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.467300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.467313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.467320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.467326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.467341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.477311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.477402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.477415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.477422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.477429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.477443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.487322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.487375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.487391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.487398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.487405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.487420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.497421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.497480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.497494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.497501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.497507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.497523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.507406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.507470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.507484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.507491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.507497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.507512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.517421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.517470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.517483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.517490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.517496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.517512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.527394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.453 [2024-12-09 16:00:50.527474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.453 [2024-12-09 16:00:50.527487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.453 [2024-12-09 16:00:50.527495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.453 [2024-12-09 16:00:50.527504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.453 [2024-12-09 16:00:50.527519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.453 qpair failed and we were unable to recover it. 
00:27:55.453 [2024-12-09 16:00:50.537459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.453 [2024-12-09 16:00:50.537518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.453 [2024-12-09 16:00:50.537533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.453 [2024-12-09 16:00:50.537540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.453 [2024-12-09 16:00:50.537546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.453 [2024-12-09 16:00:50.537561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.453 qpair failed and we were unable to recover it.
00:27:55.453 [2024-12-09 16:00:50.547485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.453 [2024-12-09 16:00:50.547537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.453 [2024-12-09 16:00:50.547551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.453 [2024-12-09 16:00:50.547558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.453 [2024-12-09 16:00:50.547564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.453 [2024-12-09 16:00:50.547579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.453 qpair failed and we were unable to recover it.
00:27:55.453 [2024-12-09 16:00:50.557509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.453 [2024-12-09 16:00:50.557559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.453 [2024-12-09 16:00:50.557572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.453 [2024-12-09 16:00:50.557579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.453 [2024-12-09 16:00:50.557585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.453 [2024-12-09 16:00:50.557600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.453 qpair failed and we were unable to recover it.
00:27:55.453 [2024-12-09 16:00:50.567530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.453 [2024-12-09 16:00:50.567583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.453 [2024-12-09 16:00:50.567597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.453 [2024-12-09 16:00:50.567604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.453 [2024-12-09 16:00:50.567610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.453 [2024-12-09 16:00:50.567625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.453 qpair failed and we were unable to recover it.
00:27:55.453 [2024-12-09 16:00:50.577557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.453 [2024-12-09 16:00:50.577631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.453 [2024-12-09 16:00:50.577644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.453 [2024-12-09 16:00:50.577651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.453 [2024-12-09 16:00:50.577658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.453 [2024-12-09 16:00:50.577674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.453 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.587589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.587640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.587654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.587662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.587670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.587686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.597594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.597648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.597661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.597668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.597675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.597690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.607657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.607728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.607742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.607749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.607755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.607770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.617677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.617732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.617748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.617756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.617762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.617777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.627737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.627791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.627803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.627810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.627816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.627832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.637731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.637787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.637800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.637807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.637813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.637828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.647773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.647824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.647837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.647845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.647851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.647866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.454 [2024-12-09 16:00:50.657746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.454 [2024-12-09 16:00:50.657801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.454 [2024-12-09 16:00:50.657814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.454 [2024-12-09 16:00:50.657823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.454 [2024-12-09 16:00:50.657830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.454 [2024-12-09 16:00:50.657845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.454 qpair failed and we were unable to recover it.
00:27:55.714 [2024-12-09 16:00:50.667820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.714 [2024-12-09 16:00:50.667874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.714 [2024-12-09 16:00:50.667888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.714 [2024-12-09 16:00:50.667894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.714 [2024-12-09 16:00:50.667900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.714 [2024-12-09 16:00:50.667915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.714 qpair failed and we were unable to recover it.
00:27:55.714 [2024-12-09 16:00:50.677846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.714 [2024-12-09 16:00:50.677905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.714 [2024-12-09 16:00:50.677918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.714 [2024-12-09 16:00:50.677925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.714 [2024-12-09 16:00:50.677931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.714 [2024-12-09 16:00:50.677946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.714 qpair failed and we were unable to recover it.
00:27:55.714 [2024-12-09 16:00:50.687867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.714 [2024-12-09 16:00:50.687922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.714 [2024-12-09 16:00:50.687935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.714 [2024-12-09 16:00:50.687942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.714 [2024-12-09 16:00:50.687948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.714 [2024-12-09 16:00:50.687963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.714 qpair failed and we were unable to recover it.
00:27:55.714 [2024-12-09 16:00:50.697922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.714 [2024-12-09 16:00:50.697984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.714 [2024-12-09 16:00:50.697997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.714 [2024-12-09 16:00:50.698004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.714 [2024-12-09 16:00:50.698010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.714 [2024-12-09 16:00:50.698025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.714 qpair failed and we were unable to recover it.
00:27:55.714 [2024-12-09 16:00:50.707941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.714 [2024-12-09 16:00:50.707999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.714 [2024-12-09 16:00:50.708012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.714 [2024-12-09 16:00:50.708019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.714 [2024-12-09 16:00:50.708026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.714 [2024-12-09 16:00:50.708040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.714 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.717933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.718011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.718024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.718031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.718038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.718052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.727918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.727977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.727991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.727998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.728004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.728018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.738017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.738076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.738089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.738095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.738102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.738116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.747990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.748055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.748068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.748075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.748081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.748096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.758066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.758121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.758134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.758141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.758148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.758162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.768088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.768142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.768156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.768163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.768169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.768184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.778170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.778231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.778244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.778251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.778257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.778272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.788156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.788211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.788228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.788238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.788245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.788259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.798196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.798253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.798266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.798273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.798279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.798294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.808215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.808274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.808287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.808294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.808300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.808315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.818183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.818256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.818269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.818276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.818282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.818297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.828286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.828350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.828363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.828370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.828376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.828395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.838322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.838379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.715 [2024-12-09 16:00:50.838392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.715 [2024-12-09 16:00:50.838399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.715 [2024-12-09 16:00:50.838405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.715 [2024-12-09 16:00:50.838419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.715 qpair failed and we were unable to recover it.
00:27:55.715 [2024-12-09 16:00:50.848357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.715 [2024-12-09 16:00:50.848410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.848423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.848430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.848437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.848453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.858386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.858475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.858488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.858495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.858501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.858515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.868415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.868475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.868488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.868495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.868502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.868517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.878457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.878509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.878521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.878528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.878535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.878549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.888459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.716 [2024-12-09 16:00:50.888514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.716 [2024-12-09 16:00:50.888526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.716 [2024-12-09 16:00:50.888534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.716 [2024-12-09 16:00:50.888540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.716 [2024-12-09 16:00:50.888554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.716 qpair failed and we were unable to recover it. 
00:27:55.716 [2024-12-09 16:00:50.898483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.716 [2024-12-09 16:00:50.898539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.716 [2024-12-09 16:00:50.898552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.716 [2024-12-09 16:00:50.898558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.716 [2024-12-09 16:00:50.898565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.716 [2024-12-09 16:00:50.898580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.716 qpair failed and we were unable to recover it. 
00:27:55.716 [2024-12-09 16:00:50.908538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.716 [2024-12-09 16:00:50.908596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.716 [2024-12-09 16:00:50.908609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.716 [2024-12-09 16:00:50.908615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.716 [2024-12-09 16:00:50.908622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.716 [2024-12-09 16:00:50.908636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.716 qpair failed and we were unable to recover it. 
00:27:55.716 [2024-12-09 16:00:50.918539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.918592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.918609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.918615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.918622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.918636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.928566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.928644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.928657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.928664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.928671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.928685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.716 [2024-12-09 16:00:50.938633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.716 [2024-12-09 16:00:50.938692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.716 [2024-12-09 16:00:50.938704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.716 [2024-12-09 16:00:50.938711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.716 [2024-12-09 16:00:50.938717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.716 [2024-12-09 16:00:50.938732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.716 qpair failed and we were unable to recover it.
00:27:55.976 [2024-12-09 16:00:50.948633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.976 [2024-12-09 16:00:50.948691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.976 [2024-12-09 16:00:50.948703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.976 [2024-12-09 16:00:50.948710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.976 [2024-12-09 16:00:50.948717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.976 [2024-12-09 16:00:50.948732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.976 qpair failed and we were unable to recover it.
00:27:55.976 [2024-12-09 16:00:50.958653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.976 [2024-12-09 16:00:50.958715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.976 [2024-12-09 16:00:50.958729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.976 [2024-12-09 16:00:50.958736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.976 [2024-12-09 16:00:50.958745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.976 [2024-12-09 16:00:50.958759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.976 qpair failed and we were unable to recover it.
00:27:55.976 [2024-12-09 16:00:50.968693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.976 [2024-12-09 16:00:50.968748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.976 [2024-12-09 16:00:50.968761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.976 [2024-12-09 16:00:50.968767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.976 [2024-12-09 16:00:50.968774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.976 [2024-12-09 16:00:50.968789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.976 qpair failed and we were unable to recover it.
00:27:55.976 [2024-12-09 16:00:50.978740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.976 [2024-12-09 16:00:50.978800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.976 [2024-12-09 16:00:50.978813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.976 [2024-12-09 16:00:50.978819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.976 [2024-12-09 16:00:50.978826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.976 [2024-12-09 16:00:50.978841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.976 qpair failed and we were unable to recover it.
00:27:55.976 [2024-12-09 16:00:50.988795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.976 [2024-12-09 16:00:50.988878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.976 [2024-12-09 16:00:50.988892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.976 [2024-12-09 16:00:50.988899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:50.988905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:50.988920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:50.998756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:50.998807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:50.998821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:50.998828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:50.998834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:50.998850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.008736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.008809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.008823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.008829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.008836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.008851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.018823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.018881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.018894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.018901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.018907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.018921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.028852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.028910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.028924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.028931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.028937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.028952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.038815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.038874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.038887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.038894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.038901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.038915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.048945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.048996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.049013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.049020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.049026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.049041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.058949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.059003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.059017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.059024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.059031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.059045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.068973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.069029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.069043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.069050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.069057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.069071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.079010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.079064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.079077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.079084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.079090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.079105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.089047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.089103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.089116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.089123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.089133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.089147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.099072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.099128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.099143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.099150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.099156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.099171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.109090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.109152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.109165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.109173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.109179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.977 [2024-12-09 16:00:51.109193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.977 qpair failed and we were unable to recover it.
00:27:55.977 [2024-12-09 16:00:51.119051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.977 [2024-12-09 16:00:51.119103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.977 [2024-12-09 16:00:51.119120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.977 [2024-12-09 16:00:51.119127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.977 [2024-12-09 16:00:51.119134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.119149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.129064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.129124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.129138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.129145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.129152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.129167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.139200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.139268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.139283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.139290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.139297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.139312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.149128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.149182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.149196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.149202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.149209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.149228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.159165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.159221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.159235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.159242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.159248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.159263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.169170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.169238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.169251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.169258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.169264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.169280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.179271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.179327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.179343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.179350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.179356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.179371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.189301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.978 [2024-12-09 16:00:51.189359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.978 [2024-12-09 16:00:51.189373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.978 [2024-12-09 16:00:51.189380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.978 [2024-12-09 16:00:51.189386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:55.978 [2024-12-09 16:00:51.189402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:55.978 qpair failed and we were unable to recover it.
00:27:55.978 [2024-12-09 16:00:51.199280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.978 [2024-12-09 16:00:51.199376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.978 [2024-12-09 16:00:51.199389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.978 [2024-12-09 16:00:51.199396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.978 [2024-12-09 16:00:51.199402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:55.978 [2024-12-09 16:00:51.199417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.978 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.209384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.209439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.209452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.209459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.209465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.209481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.219450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.219506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.219519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.219529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.219535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.219551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.229461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.229522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.229536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.229542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.229549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.229563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.239514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.239599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.239613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.239620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.239626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.239641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.249466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.249521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.249534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.249541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.249547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.249562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.259558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.259615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.259629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.259636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.259642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.259656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.238 [2024-12-09 16:00:51.269492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.238 [2024-12-09 16:00:51.269546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.238 [2024-12-09 16:00:51.269560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.238 [2024-12-09 16:00:51.269567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.238 [2024-12-09 16:00:51.269573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.238 [2024-12-09 16:00:51.269588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.238 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.279609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.279692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.279706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.279712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.279719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.279733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.289536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.289592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.289605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.289612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.289618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.289633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.299567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.299626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.299640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.299647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.299654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.299669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.309590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.309649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.309662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.309669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.309676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.309690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.319637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.319723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.319737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.319743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.319750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.319764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.329803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.329907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.329921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.329928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.329935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.329950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.339678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.339736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.339748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.339755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.339761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.339776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.349791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.349865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.349878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.349888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.349894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.349908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.359795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.359862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.359874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.359881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.359887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.359902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.369828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.369879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.369892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.369899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.369906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.369922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.379859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.379912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.379926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.379933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.379939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.379954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.389905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.389980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.389994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.390001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.390007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.390025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.399930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.400011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.239 [2024-12-09 16:00:51.400024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.239 [2024-12-09 16:00:51.400031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.239 [2024-12-09 16:00:51.400037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.239 [2024-12-09 16:00:51.400052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.239 qpair failed and we were unable to recover it. 
00:27:56.239 [2024-12-09 16:00:51.409935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.239 [2024-12-09 16:00:51.409996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.410009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.410016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.410022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.410037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-12-09 16:00:51.419978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.240 [2024-12-09 16:00:51.420032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.420045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.420052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.420058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.420073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-12-09 16:00:51.430053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.240 [2024-12-09 16:00:51.430113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.430126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.430134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.430140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.430154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-12-09 16:00:51.440013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.240 [2024-12-09 16:00:51.440077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.440090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.440097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.440104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.440118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-12-09 16:00:51.450064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.240 [2024-12-09 16:00:51.450119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.450133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.450141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.450148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.450162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.240 [2024-12-09 16:00:51.460117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.240 [2024-12-09 16:00:51.460174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.240 [2024-12-09 16:00:51.460187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.240 [2024-12-09 16:00:51.460194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.240 [2024-12-09 16:00:51.460201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.240 [2024-12-09 16:00:51.460216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.240 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.470118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.470169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.470182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.470188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.470194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.470209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.480179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.480241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.480257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.480265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.480271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.480286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.490226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.490286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.490299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.490306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.490312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.490327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.500258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.500360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.500373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.500380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.500386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.500401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.510224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.510280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.510293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.510300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.510307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.510321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.520260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.520310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.520323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.520330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.520339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.520353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.530278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.530329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.530341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.530348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.530356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.500 [2024-12-09 16:00:51.530370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.500 [2024-12-09 16:00:51.540294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.500 [2024-12-09 16:00:51.540351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.500 [2024-12-09 16:00:51.540364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.500 [2024-12-09 16:00:51.540371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-12-09 16:00:51.540378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.540393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.550332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.550390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.550402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.550409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.550415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.550430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.560358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.560421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.560433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.560441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.560447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.560460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.570395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.570447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.570460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.570467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.570474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.570488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.580428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.580498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.580511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.580519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.580525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.580539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.590504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.590569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.590582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.590590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.590596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.590610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.600480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.600535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.600548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.600555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.600561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.600576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.610428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.610482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.610497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.610505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.610511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.610525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.620523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.620606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.620619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.620626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.620632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.620645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.630573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.630627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.630640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.630647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.630653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.630667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.640636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.640692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.640706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.640713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.640720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.640734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.650630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.650685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.650698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.650705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.650714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.650729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.660655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.660713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.660726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.660733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.660740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.660754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.670690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.501 [2024-12-09 16:00:51.670743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.501 [2024-12-09 16:00:51.670756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.501 [2024-12-09 16:00:51.670763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.501 [2024-12-09 16:00:51.670770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.501 [2024-12-09 16:00:51.670785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.501 qpair failed and we were unable to recover it. 
00:27:56.501 [2024-12-09 16:00:51.680728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.502 [2024-12-09 16:00:51.680792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.502 [2024-12-09 16:00:51.680806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.502 [2024-12-09 16:00:51.680812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.502 [2024-12-09 16:00:51.680819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.502 [2024-12-09 16:00:51.680833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.502 qpair failed and we were unable to recover it. 
00:27:56.502 [2024-12-09 16:00:51.690751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.502 [2024-12-09 16:00:51.690818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.502 [2024-12-09 16:00:51.690830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.502 [2024-12-09 16:00:51.690837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.502 [2024-12-09 16:00:51.690843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.502 [2024-12-09 16:00:51.690859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.502 qpair failed and we were unable to recover it. 
00:27:56.502 [2024-12-09 16:00:51.700782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.502 [2024-12-09 16:00:51.700847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.502 [2024-12-09 16:00:51.700861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.502 [2024-12-09 16:00:51.700869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.502 [2024-12-09 16:00:51.700874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.502 [2024-12-09 16:00:51.700889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.502 qpair failed and we were unable to recover it. 
00:27:56.502 [2024-12-09 16:00:51.710809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.502 [2024-12-09 16:00:51.710862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.502 [2024-12-09 16:00:51.710875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.502 [2024-12-09 16:00:51.710881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.502 [2024-12-09 16:00:51.710889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.502 [2024-12-09 16:00:51.710903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.502 qpair failed and we were unable to recover it. 
00:27:56.502 [2024-12-09 16:00:51.720830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.502 [2024-12-09 16:00:51.720886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.502 [2024-12-09 16:00:51.720899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.502 [2024-12-09 16:00:51.720905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.502 [2024-12-09 16:00:51.720912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.502 [2024-12-09 16:00:51.720926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.502 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.730862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.730927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.730940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.730947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.730954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.730969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.740909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.740969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.740985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.740993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.740999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.741013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.750918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.750973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.750986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.750993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.750999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.751014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.760946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.761000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.761013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.761019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.761026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.761040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.771017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.771072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.771085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.771092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.771099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.771113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.780988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.781082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.781095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.781105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.781111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.781125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.791038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.791128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.791142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.791149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.791155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.791169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.801051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.801109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.801123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.801131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.801137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.801152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.811079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.811145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.811157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.811164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.811170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.811185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.821181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.821239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.821252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.821259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.821265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.821283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-12-09 16:00:51.831151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-12-09 16:00:51.831207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-12-09 16:00:51.831225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-12-09 16:00:51.831232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-12-09 16:00:51.831238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.762 [2024-12-09 16:00:51.831253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.841182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.841236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.841248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.841255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.841262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.841276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.851232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.851313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.851327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.851334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.851341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.851355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.861264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.861334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.861347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.861354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.861360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.861374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.871267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.871321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.871335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.871342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.871348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.871363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.881292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.881346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.881361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.881369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.881375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.881390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.891349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.891398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.891411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.891418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.891425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.891439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.901331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.901387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.901400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.901407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.901413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.901428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.911377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.911434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.911447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.911458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.911464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.911479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.921413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.921480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.921493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.921500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.921506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.921520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.931439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.931493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.931506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.931512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.931519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.931533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.941478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.941532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.941546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.941552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.941559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.941574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.951531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.951596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.951609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.951616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.951623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.951640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.961541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.961596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.763 [2024-12-09 16:00:51.961609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.763 [2024-12-09 16:00:51.961615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.763 [2024-12-09 16:00:51.961622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.763 [2024-12-09 16:00:51.961637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.763 qpair failed and we were unable to recover it. 
00:27:56.763 [2024-12-09 16:00:51.971565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.763 [2024-12-09 16:00:51.971619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.764 [2024-12-09 16:00:51.971632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.764 [2024-12-09 16:00:51.971639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.764 [2024-12-09 16:00:51.971646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.764 [2024-12-09 16:00:51.971660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:56.764 [2024-12-09 16:00:51.981648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.764 [2024-12-09 16:00:51.981728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.764 [2024-12-09 16:00:51.981741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.764 [2024-12-09 16:00:51.981750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.764 [2024-12-09 16:00:51.981756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:56.764 [2024-12-09 16:00:51.981770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.764 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:51.991661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:51.991720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:51.991734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:51.991742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:51.991748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:51.991763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.001667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.001736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.001749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.001756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.001762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.001778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.011671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.011723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.011736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.011743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.011750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.011765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.021707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.021764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.021777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.021784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.021790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.021804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.031735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.031788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.031801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.031808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.031815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.031829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.041820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.041873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.041889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.041895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.041902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.041916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-12-09 16:00:52.051789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-12-09 16:00:52.051841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-12-09 16:00:52.051854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-12-09 16:00:52.051861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-12-09 16:00:52.051867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.023 [2024-12-09 16:00:52.051882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-12-09 16:00:52.061827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-12-09 16:00:52.061882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-12-09 16:00:52.061894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-12-09 16:00:52.061901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-12-09 16:00:52.061908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.024 [2024-12-09 16:00:52.061922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-12-09 16:00:52.071847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-12-09 16:00:52.071903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-12-09 16:00:52.071915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-12-09 16:00:52.071922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-12-09 16:00:52.071929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.024 [2024-12-09 16:00:52.071944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-12-09 16:00:52.081873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-12-09 16:00:52.081928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-12-09 16:00:52.081941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-12-09 16:00:52.081948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-12-09 16:00:52.081957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.024 [2024-12-09 16:00:52.081972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-12-09 16:00:52.091920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.091989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.092002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.092009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.092015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.092029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.101937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.101989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.102003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.102010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.102016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.102030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.111955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.112010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.112023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.112030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.112037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.112051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.121917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.121981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.121994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.122001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.122007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.122022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.132041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.132093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.132106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.132113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.132120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.132134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.141979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.142060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.142073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.142079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.142085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.142100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.152123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.152183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.152195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.152202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.152208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.152226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.162096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.162184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.162197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.162204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.162210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.162229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.172137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.172194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.172210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.172220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.172226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.172241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.182165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.182254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.024 [2024-12-09 16:00:52.182267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.024 [2024-12-09 16:00:52.182274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.024 [2024-12-09 16:00:52.182281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.024 [2024-12-09 16:00:52.182296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.024 qpair failed and we were unable to recover it.
00:27:57.024 [2024-12-09 16:00:52.192211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.024 [2024-12-09 16:00:52.192277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.192291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.192298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.192304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.192319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.025 [2024-12-09 16:00:52.202259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.025 [2024-12-09 16:00:52.202312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.202325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.202332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.202339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.202353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.025 [2024-12-09 16:00:52.212233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.025 [2024-12-09 16:00:52.212310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.212323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.212329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.212338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.212353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.025 [2024-12-09 16:00:52.222275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.025 [2024-12-09 16:00:52.222331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.222344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.222351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.222358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.222373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.025 [2024-12-09 16:00:52.232300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.025 [2024-12-09 16:00:52.232353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.232367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.232374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.232380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.232395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.025 [2024-12-09 16:00:52.242330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.025 [2024-12-09 16:00:52.242382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.025 [2024-12-09 16:00:52.242395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.025 [2024-12-09 16:00:52.242402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.025 [2024-12-09 16:00:52.242408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.025 [2024-12-09 16:00:52.242423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.025 qpair failed and we were unable to recover it.
00:27:57.284 [2024-12-09 16:00:52.252352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.284 [2024-12-09 16:00:52.252405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.284 [2024-12-09 16:00:52.252418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.284 [2024-12-09 16:00:52.252425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.284 [2024-12-09 16:00:52.252431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.284 [2024-12-09 16:00:52.252445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.284 qpair failed and we were unable to recover it.
00:27:57.284 [2024-12-09 16:00:52.262405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.284 [2024-12-09 16:00:52.262463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.284 [2024-12-09 16:00:52.262476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.284 [2024-12-09 16:00:52.262483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.284 [2024-12-09 16:00:52.262490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.284 [2024-12-09 16:00:52.262505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.284 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.272429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.272483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.272496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.272503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.272510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.272525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.282476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.282530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.282542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.282549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.282556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.282570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.292546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.292596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.292608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.292615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.292621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.292636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.302505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.302560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.302576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.302582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.302588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.302603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.312575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.312633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.312646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.312653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.312659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.312673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.322578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.322641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.322654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.322661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.322667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.322682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.332580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.332635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.332648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.332655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.332661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.332676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.342603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.342660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.342673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.342682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.342689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.342704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.352640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.352730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.352743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.352750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.352756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.352772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.362634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.362701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.362717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.362725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.362734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.362751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.372702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.372752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.372766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.372773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.372780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.372794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.382723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.382783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.382808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.382816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.382822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.285 [2024-12-09 16:00:52.382846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.285 qpair failed and we were unable to recover it.
00:27:57.285 [2024-12-09 16:00:52.392681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.285 [2024-12-09 16:00:52.392742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.285 [2024-12-09 16:00:52.392757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.285 [2024-12-09 16:00:52.392764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.285 [2024-12-09 16:00:52.392771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.286 [2024-12-09 16:00:52.392786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.286 qpair failed and we were unable to recover it.
00:27:57.286 [2024-12-09 16:00:52.402788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.286 [2024-12-09 16:00:52.402841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.286 [2024-12-09 16:00:52.402854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.286 [2024-12-09 16:00:52.402861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.286 [2024-12-09 16:00:52.402868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.286 [2024-12-09 16:00:52.402883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.286 qpair failed and we were unable to recover it.
00:27:57.286 [2024-12-09 16:00:52.412724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.286 [2024-12-09 16:00:52.412777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.286 [2024-12-09 16:00:52.412790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.286 [2024-12-09 16:00:52.412797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.286 [2024-12-09 16:00:52.412803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.286 [2024-12-09 16:00:52.412819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.286 qpair failed and we were unable to recover it.
00:27:57.286 [2024-12-09 16:00:52.422771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.286 [2024-12-09 16:00:52.422829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.286 [2024-12-09 16:00:52.422841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.286 [2024-12-09 16:00:52.422848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.286 [2024-12-09 16:00:52.422854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.286 [2024-12-09 16:00:52.422869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.286 qpair failed and we were unable to recover it.
00:27:57.286 [2024-12-09 16:00:52.432889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.286 [2024-12-09 16:00:52.432948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.286 [2024-12-09 16:00:52.432961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.286 [2024-12-09 16:00:52.432967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.286 [2024-12-09 16:00:52.432974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.286 [2024-12-09 16:00:52.432988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.286 qpair failed and we were unable to recover it.
00:27:57.286 [2024-12-09 16:00:52.442942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.442995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.443008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.443015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.443021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.443036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.452991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.453045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.453058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.453066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.453072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.453086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.462999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.463055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.463068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.463075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.463082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.463097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.472989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.473045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.473059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.473069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.473075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.473090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.482942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.483002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.483015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.483022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.483029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.483044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.493012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.493064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.493078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.493085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.493091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.493106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.286 [2024-12-09 16:00:52.503003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.286 [2024-12-09 16:00:52.503058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.286 [2024-12-09 16:00:52.503071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.286 [2024-12-09 16:00:52.503078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.286 [2024-12-09 16:00:52.503085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.286 [2024-12-09 16:00:52.503100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.286 qpair failed and we were unable to recover it. 
00:27:57.546 [2024-12-09 16:00:52.513076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.513134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.513147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.513154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.513160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.513177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.523137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.523190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.523203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.523210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.523221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.523236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.533139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.533188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.533201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.533208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.533214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.533233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.543179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.543255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.543268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.543275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.543281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.543296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.553201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.553261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.553274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.553281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.553288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.553303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.563327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.563388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.563401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.563408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.563415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.563430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.573265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.573320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.573334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.573341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.573347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.573361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.583294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.583349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.583363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.583369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.583375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.583390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.593308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.593362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.593374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.593381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.593388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.593403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.603365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.603421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.603437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.603443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.603450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.603464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.613356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.613412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.613424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.613431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.613438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.613452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.623359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.623415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.623428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.623435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.623442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.623458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.547 [2024-12-09 16:00:52.633371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.547 [2024-12-09 16:00:52.633432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.547 [2024-12-09 16:00:52.633445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.547 [2024-12-09 16:00:52.633451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.547 [2024-12-09 16:00:52.633458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.547 [2024-12-09 16:00:52.633472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.547 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.643439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.643508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.643522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.643530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.643538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.643553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.653457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.653535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.653548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.653555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.653561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.653576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.663467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.663566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.663579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.663586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.663593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.663606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.673516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.673570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.673583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.673590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.673596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.673611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.683570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.683625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.683638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.683645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.683652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.683666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.693539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.693591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.693603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.693610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.693616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.693632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.703639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.548 [2024-12-09 16:00:52.703695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.548 [2024-12-09 16:00:52.703708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.548 [2024-12-09 16:00:52.703715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.548 [2024-12-09 16:00:52.703722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:57.548 [2024-12-09 16:00:52.703736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.548 qpair failed and we were unable to recover it. 
00:27:57.548 [2024-12-09 16:00:52.713668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.713719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.713732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.713739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.713745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.713760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.548 [2024-12-09 16:00:52.723634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.723692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.723705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.723712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.723718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.723733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.548 [2024-12-09 16:00:52.733772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.733832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.733847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.733854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.733861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.733875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.548 [2024-12-09 16:00:52.743701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.743757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.743769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.743776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.743783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.743798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.548 [2024-12-09 16:00:52.753723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.753778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.753791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.753798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.753805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.753819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.548 [2024-12-09 16:00:52.763745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.548 [2024-12-09 16:00:52.763800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.548 [2024-12-09 16:00:52.763813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.548 [2024-12-09 16:00:52.763820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.548 [2024-12-09 16:00:52.763827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.548 [2024-12-09 16:00:52.763841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.548 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.773756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.773812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.773825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.773832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.773841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.773855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.783875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.783939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.783953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.783960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.783966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.783980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.793888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.793945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.793958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.793965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.793971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.793986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.803923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.804000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.804013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.804020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.804026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.804041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.813942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.813998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.814011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.814017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.814024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.814039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.823979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.824036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.824049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.824056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.824062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.824077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.834040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.834093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.834106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.834112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.834119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.834134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.844026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.844076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.844089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.844096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.844102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.844117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.854048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.854102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.854115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.854122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.854129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.854143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.864142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.864197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.864215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.864226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.864232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.864247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.874112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.809 [2024-12-09 16:00:52.874174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.809 [2024-12-09 16:00:52.874186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.809 [2024-12-09 16:00:52.874193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.809 [2024-12-09 16:00:52.874200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.809 [2024-12-09 16:00:52.874215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.809 qpair failed and we were unable to recover it.
00:27:57.809 [2024-12-09 16:00:52.884162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.884244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.884257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.884264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.884270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.884285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.894168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.894229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.894243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.894250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.894256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.894271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.904212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.904299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.904312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.904322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.904328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.904343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.914257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.914367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.914381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.914388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.914394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.914409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.924255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.924335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.924348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.924355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.924361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.924376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.934285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.934357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.934371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.934378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.934384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.934398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.944376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.944478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.944492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.944499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.944505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.944522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.954393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.954449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.954462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.954469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.954475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.954490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.964399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.964453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.964466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.964472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.964479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.964493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.974403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.974458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.974471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.974478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.974484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.974499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.984474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.984537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.984551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.984558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.984564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.984579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:52.994459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:52.994517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:52.994530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:52.994537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:52.994543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:52.994558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:53.004507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:53.004561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:53.004573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:53.004580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.810 [2024-12-09 16:00:53.004586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.810 [2024-12-09 16:00:53.004600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.810 qpair failed and we were unable to recover it.
00:27:57.810 [2024-12-09 16:00:53.014514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.810 [2024-12-09 16:00:53.014567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.810 [2024-12-09 16:00:53.014580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.810 [2024-12-09 16:00:53.014586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.811 [2024-12-09 16:00:53.014593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.811 [2024-12-09 16:00:53.014607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.811 qpair failed and we were unable to recover it.
00:27:57.811 [2024-12-09 16:00:53.024552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.811 [2024-12-09 16:00:53.024608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.811 [2024-12-09 16:00:53.024621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.811 [2024-12-09 16:00:53.024628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.811 [2024-12-09 16:00:53.024634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.811 [2024-12-09 16:00:53.024648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.811 qpair failed and we were unable to recover it.
00:27:57.811 [2024-12-09 16:00:53.034633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:57.811 [2024-12-09 16:00:53.034734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:57.811 [2024-12-09 16:00:53.034747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:57.811 [2024-12-09 16:00:53.034757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:57.811 [2024-12-09 16:00:53.034763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:57.811 [2024-12-09 16:00:53.034777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:57.811 qpair failed and we were unable to recover it.
00:27:58.071 [2024-12-09 16:00:53.044617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.071 [2024-12-09 16:00:53.044702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.071 [2024-12-09 16:00:53.044715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.071 [2024-12-09 16:00:53.044722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.071 [2024-12-09 16:00:53.044728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.071 [2024-12-09 16:00:53.044743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.071 qpair failed and we were unable to recover it.
00:27:58.071 [2024-12-09 16:00:53.054627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.071 [2024-12-09 16:00:53.054704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.071 [2024-12-09 16:00:53.054718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.071 [2024-12-09 16:00:53.054725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.071 [2024-12-09 16:00:53.054731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.071 [2024-12-09 16:00:53.054745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.071 qpair failed and we were unable to recover it.
00:27:58.071 [2024-12-09 16:00:53.064707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.064778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.064792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.064799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.064805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.064819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.074699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.074754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.074767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.074773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.074781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.074798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.084767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.084825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.084838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.084845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.084852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.084867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.094756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.094862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.094876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.094883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.094889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.094904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.104729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.104788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.104801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.104807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.104814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.104828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.114824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.114882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.114896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.114903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.114910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.071 [2024-12-09 16:00:53.114924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.071 qpair failed and we were unable to recover it. 
00:27:58.071 [2024-12-09 16:00:53.124832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.071 [2024-12-09 16:00:53.124882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.071 [2024-12-09 16:00:53.124896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.071 [2024-12-09 16:00:53.124902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.071 [2024-12-09 16:00:53.124908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.124923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.134911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.134970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.134983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.134990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.134997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.135011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.144948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.145047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.145060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.145067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.145073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.145087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.154962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.155030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.155044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.155051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.155057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.155073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.164936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.165036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.165052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.165060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.165066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.165080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.175025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.175087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.175101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.175108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.175114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.175129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.185005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.185078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.185091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.185099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.185105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.185119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.194975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.195030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.195044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.195051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.195058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.195073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.205063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.205154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.205167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.205174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.205183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.205198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.215048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.215136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.215150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.215157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.215163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.215178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.225128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.225185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.225198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.225205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.225211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.225230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.235240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.235311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.235324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.235331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.235338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.235353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.245282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.245382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.072 [2024-12-09 16:00:53.245395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.072 [2024-12-09 16:00:53.245402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.072 [2024-12-09 16:00:53.245408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.072 [2024-12-09 16:00:53.245423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.072 qpair failed and we were unable to recover it. 
00:27:58.072 [2024-12-09 16:00:53.255286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.072 [2024-12-09 16:00:53.255347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.073 [2024-12-09 16:00:53.255360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.073 [2024-12-09 16:00:53.255367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.073 [2024-12-09 16:00:53.255374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.073 [2024-12-09 16:00:53.255388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-12-09 16:00:53.265328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.073 [2024-12-09 16:00:53.265386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.073 [2024-12-09 16:00:53.265399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.073 [2024-12-09 16:00:53.265406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.073 [2024-12-09 16:00:53.265412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.073 [2024-12-09 16:00:53.265428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-12-09 16:00:53.275277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.073 [2024-12-09 16:00:53.275335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.073 [2024-12-09 16:00:53.275348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.073 [2024-12-09 16:00:53.275356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.073 [2024-12-09 16:00:53.275362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.073 [2024-12-09 16:00:53.275377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-12-09 16:00:53.285338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.073 [2024-12-09 16:00:53.285395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.073 [2024-12-09 16:00:53.285408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.073 [2024-12-09 16:00:53.285415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.073 [2024-12-09 16:00:53.285421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.073 [2024-12-09 16:00:53.285436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.073 [2024-12-09 16:00:53.295361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.073 [2024-12-09 16:00:53.295465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.073 [2024-12-09 16:00:53.295481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.073 [2024-12-09 16:00:53.295488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.073 [2024-12-09 16:00:53.295494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.073 [2024-12-09 16:00:53.295508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.073 qpair failed and we were unable to recover it. 
00:27:58.333 [2024-12-09 16:00:53.305361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.333 [2024-12-09 16:00:53.305417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.333 [2024-12-09 16:00:53.305430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.333 [2024-12-09 16:00:53.305436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.333 [2024-12-09 16:00:53.305442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.333 [2024-12-09 16:00:53.305457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.333 qpair failed and we were unable to recover it. 
00:27:58.333 [2024-12-09 16:00:53.315328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.333 [2024-12-09 16:00:53.315384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.333 [2024-12-09 16:00:53.315397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.333 [2024-12-09 16:00:53.315404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.333 [2024-12-09 16:00:53.315411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.333 [2024-12-09 16:00:53.315426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.333 qpair failed and we were unable to recover it. 
00:27:58.333 [2024-12-09 16:00:53.325466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.333 [2024-12-09 16:00:53.325519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.333 [2024-12-09 16:00:53.325533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.333 [2024-12-09 16:00:53.325540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.333 [2024-12-09 16:00:53.325546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.333 [2024-12-09 16:00:53.325560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.333 qpair failed and we were unable to recover it. 
00:27:58.333 [2024-12-09 16:00:53.335443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.333 [2024-12-09 16:00:53.335497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.333 [2024-12-09 16:00:53.335510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.333 [2024-12-09 16:00:53.335517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.333 [2024-12-09 16:00:53.335526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.333 [2024-12-09 16:00:53.335540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.333 qpair failed and we were unable to recover it.
00:27:58.333 [2024-12-09 16:00:53.345477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.333 [2024-12-09 16:00:53.345585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.333 [2024-12-09 16:00:53.345599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.333 [2024-12-09 16:00:53.345606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.333 [2024-12-09 16:00:53.345611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.333 [2024-12-09 16:00:53.345625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.333 qpair failed and we were unable to recover it.
00:27:58.333 [2024-12-09 16:00:53.355548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.333 [2024-12-09 16:00:53.355603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.333 [2024-12-09 16:00:53.355616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.333 [2024-12-09 16:00:53.355623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.333 [2024-12-09 16:00:53.355629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.333 [2024-12-09 16:00:53.355644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.333 qpair failed and we were unable to recover it.
00:27:58.333 [2024-12-09 16:00:53.365530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.333 [2024-12-09 16:00:53.365609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.333 [2024-12-09 16:00:53.365622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.333 [2024-12-09 16:00:53.365629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.333 [2024-12-09 16:00:53.365635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.333 [2024-12-09 16:00:53.365649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.333 qpair failed and we were unable to recover it.
00:27:58.333 [2024-12-09 16:00:53.375480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.333 [2024-12-09 16:00:53.375535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.333 [2024-12-09 16:00:53.375548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.333 [2024-12-09 16:00:53.375555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.375563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.375579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.385567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.385668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.385682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.385689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.385695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.385709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.395572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.395626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.395640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.395647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.395653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.395667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.405624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.405693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.405707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.405714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.405720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.405734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.415641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.415696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.415710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.415717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.415723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.415738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.425689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.425748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.425761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.425768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.425775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.425789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.435649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.435700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.435713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.435720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.435726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.435740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.445737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.445808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.445822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.445829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.445836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.445851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.455768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.455844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.455858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.455865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.455871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.455886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.465808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.465876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.465889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.465899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.465905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.465919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.475871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.475975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.475989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.475996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.476002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.476016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.485852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.485911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.485924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.485932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.485938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.485952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.495878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.495935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.495949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.334 [2024-12-09 16:00:53.495956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.334 [2024-12-09 16:00:53.495963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.334 [2024-12-09 16:00:53.495977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.334 qpair failed and we were unable to recover it.
00:27:58.334 [2024-12-09 16:00:53.505935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.334 [2024-12-09 16:00:53.505990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.334 [2024-12-09 16:00:53.506003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.506010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.506017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.506035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.335 [2024-12-09 16:00:53.515991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.335 [2024-12-09 16:00:53.516050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.335 [2024-12-09 16:00:53.516062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.516069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.516076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.516090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.335 [2024-12-09 16:00:53.525966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.335 [2024-12-09 16:00:53.526022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.335 [2024-12-09 16:00:53.526035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.526042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.526049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.526064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.335 [2024-12-09 16:00:53.536065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.335 [2024-12-09 16:00:53.536117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.335 [2024-12-09 16:00:53.536130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.536137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.536143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.536158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.335 [2024-12-09 16:00:53.546038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.335 [2024-12-09 16:00:53.546095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.335 [2024-12-09 16:00:53.546108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.546114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.546121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.546134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.335 [2024-12-09 16:00:53.556059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.335 [2024-12-09 16:00:53.556117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.335 [2024-12-09 16:00:53.556132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.335 [2024-12-09 16:00:53.556139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.335 [2024-12-09 16:00:53.556145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.335 [2024-12-09 16:00:53.556159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.335 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.566048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.566140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.566155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.566162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.566168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.566182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.576110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.576164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.576178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.576185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.576191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.576206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.586157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.586222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.586235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.586243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.586249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.586264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.596164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.596222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.596236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.596245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.596252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.596267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.606132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.606194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.606208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.606214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.606226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.606241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.616199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.616265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.616279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.616286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.616292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.616307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.626307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.626391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.626404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.626411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.626417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.626432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.636298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.636348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.636362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.636369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.636375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.636395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.646322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.646373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.646387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.595 [2024-12-09 16:00:53.646393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.595 [2024-12-09 16:00:53.646400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.595 [2024-12-09 16:00:53.646414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.595 qpair failed and we were unable to recover it.
00:27:58.595 [2024-12-09 16:00:53.656389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.595 [2024-12-09 16:00:53.656484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.595 [2024-12-09 16:00:53.656498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.596 [2024-12-09 16:00:53.656505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.596 [2024-12-09 16:00:53.656511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.596 [2024-12-09 16:00:53.656525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.596 qpair failed and we were unable to recover it.
00:27:58.596 [2024-12-09 16:00:53.666413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.596 [2024-12-09 16:00:53.666472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.596 [2024-12-09 16:00:53.666485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.596 [2024-12-09 16:00:53.666492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.596 [2024-12-09 16:00:53.666499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.596 [2024-12-09 16:00:53.666514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.596 qpair failed and we were unable to recover it.
00:27:58.596 [2024-12-09 16:00:53.676440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.596 [2024-12-09 16:00:53.676496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.596 [2024-12-09 16:00:53.676509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.596 [2024-12-09 16:00:53.676516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.596 [2024-12-09 16:00:53.676523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:58.596 [2024-12-09 16:00:53.676537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.596 qpair failed and we were unable to recover it.
00:27:58.596 [2024-12-09 16:00:53.686422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.686472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.686485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.686492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.686498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.686513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.696513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.696567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.696580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.696587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.696594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.696609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.706499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.706555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.706568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.706575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.706582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.706596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.716509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.716566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.716579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.716586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.716592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.716607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.726605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.726696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.726712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.726718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.726724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.726739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.736587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.736643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.736655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.736662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.736668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.736682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.746613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.746668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.746680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.746687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.746694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.746708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.756689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.756748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.756762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.756769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.756776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.756790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.596 [2024-12-09 16:00:53.766655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.596 [2024-12-09 16:00:53.766704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.596 [2024-12-09 16:00:53.766717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.596 [2024-12-09 16:00:53.766724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.596 [2024-12-09 16:00:53.766734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.596 [2024-12-09 16:00:53.766748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.596 qpair failed and we were unable to recover it. 
00:27:58.597 [2024-12-09 16:00:53.776632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.597 [2024-12-09 16:00:53.776687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.597 [2024-12-09 16:00:53.776700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.597 [2024-12-09 16:00:53.776708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.597 [2024-12-09 16:00:53.776714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.597 [2024-12-09 16:00:53.776729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.597 qpair failed and we were unable to recover it. 
00:27:58.597 [2024-12-09 16:00:53.786641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.597 [2024-12-09 16:00:53.786703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.597 [2024-12-09 16:00:53.786717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.597 [2024-12-09 16:00:53.786724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.597 [2024-12-09 16:00:53.786730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.597 [2024-12-09 16:00:53.786744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.597 qpair failed and we were unable to recover it. 
00:27:58.597 [2024-12-09 16:00:53.796719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.597 [2024-12-09 16:00:53.796778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.597 [2024-12-09 16:00:53.796792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.597 [2024-12-09 16:00:53.796800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.597 [2024-12-09 16:00:53.796806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.597 [2024-12-09 16:00:53.796821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.597 qpair failed and we were unable to recover it. 
00:27:58.597 [2024-12-09 16:00:53.806761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.597 [2024-12-09 16:00:53.806854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.597 [2024-12-09 16:00:53.806868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.597 [2024-12-09 16:00:53.806877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.597 [2024-12-09 16:00:53.806884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.597 [2024-12-09 16:00:53.806899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.597 qpair failed and we were unable to recover it. 
00:27:58.597 [2024-12-09 16:00:53.816866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.597 [2024-12-09 16:00:53.816956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.597 [2024-12-09 16:00:53.816970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.597 [2024-12-09 16:00:53.816978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.597 [2024-12-09 16:00:53.816984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.597 [2024-12-09 16:00:53.816999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.597 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.826767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.826823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.826837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.826844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.826851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.826866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.836881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.836938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.836952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.836959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.836966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.836981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.846799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.846866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.846879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.846886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.846892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.846907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.856832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.856896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.856913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.856921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.856927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.856941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.866946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.867003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.867016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.867024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.867030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.867044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.876958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.877016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.877029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.877036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.877042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.877056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.886913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.886967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.886980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.886988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.886995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.887010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.896929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.896991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.897004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.897012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.897021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.897036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.907047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.907112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.907125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.907132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.907138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.907152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.917137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.917231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.917246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.857 [2024-12-09 16:00:53.917253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.857 [2024-12-09 16:00:53.917259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.857 [2024-12-09 16:00:53.917274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.857 qpair failed and we were unable to recover it. 
00:27:58.857 [2024-12-09 16:00:53.927057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.857 [2024-12-09 16:00:53.927139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.857 [2024-12-09 16:00:53.927152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.927159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.927166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.927180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.937155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.937222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.937235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.937242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.937248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.937263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.947157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.947220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.947233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.947240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.947246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.947261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.957121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.957200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.957214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.957225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.957231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.957246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.967128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.967183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.967196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.967203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.967209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.967228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.977282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.977358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.977371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.977378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.977384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.977399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.987274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.987348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.987362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.987369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.987375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.987390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:53.997322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:53.997377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:53.997391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:53.997398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:53.997404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:53.997418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.007339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.007395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.007409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.007416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.007422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:54.007437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.017310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.017375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.017388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.017395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.017401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:54.017416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.027340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.027415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.027429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.027439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.027445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:54.027460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.037430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.037488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.037500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.037507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.037514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:54.037529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.047454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.047513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.047527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.047534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.047540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.858 [2024-12-09 16:00:54.047555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.858 qpair failed and we were unable to recover it. 
00:27:58.858 [2024-12-09 16:00:54.057455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.858 [2024-12-09 16:00:54.057506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.858 [2024-12-09 16:00:54.057519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.858 [2024-12-09 16:00:54.057526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.858 [2024-12-09 16:00:54.057533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.859 [2024-12-09 16:00:54.057548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.859 qpair failed and we were unable to recover it. 
00:27:58.859 [2024-12-09 16:00:54.067478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.859 [2024-12-09 16:00:54.067537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.859 [2024-12-09 16:00:54.067550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.859 [2024-12-09 16:00:54.067558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.859 [2024-12-09 16:00:54.067564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.859 [2024-12-09 16:00:54.067582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.859 qpair failed and we were unable to recover it. 
00:27:58.859 [2024-12-09 16:00:54.077446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.859 [2024-12-09 16:00:54.077509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.859 [2024-12-09 16:00:54.077522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.859 [2024-12-09 16:00:54.077529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.859 [2024-12-09 16:00:54.077536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:58.859 [2024-12-09 16:00:54.077550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.859 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.087625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.087682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.087694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.087701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.087707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.087721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.097505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.097561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.097574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.097581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.097587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.097602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.107601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.107693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.107707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.107713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.107719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.107734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.117618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.117679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.117693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.117700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.117705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.117721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.127669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.127735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.127748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.127755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.127761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.127775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.137670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.137749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.137763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.137770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.137776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.137791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.147712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.147781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.147794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.147801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.147808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.147822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.157722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.157779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.157795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.157802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.157809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.157823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.167778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.167834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.167847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.167854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.167861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.167876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.119 qpair failed and we were unable to recover it. 
00:27:59.119 [2024-12-09 16:00:54.177768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.119 [2024-12-09 16:00:54.177823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.119 [2024-12-09 16:00:54.177835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.119 [2024-12-09 16:00:54.177842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.119 [2024-12-09 16:00:54.177849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.119 [2024-12-09 16:00:54.177863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.187773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.187829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.187843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.187849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.187855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.187870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.197873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.197930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.197944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.197950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.197957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.197974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.207925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.207980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.207992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.207999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.208006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.208020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.217962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.218016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.218029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.218036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.218042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.218056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.227955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.228008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.228022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.228029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.228035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.228050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.237991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.238050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.238063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.238070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.238077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.238091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.247930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.247988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.248001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.248008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.248015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.248029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.258022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.258095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.258108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.258115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.258121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.258136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.268089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.268159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.268173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.268180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.268186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.268200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.278089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.278155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.278168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.278176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.278182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.278196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.288115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.288180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.288196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.288203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.288209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.288227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.298141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.298196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.298209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.298220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.298228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.298243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.308175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.308235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.120 [2024-12-09 16:00:54.308248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.120 [2024-12-09 16:00:54.308255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.120 [2024-12-09 16:00:54.308262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.120 [2024-12-09 16:00:54.308277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.120 qpair failed and we were unable to recover it. 
00:27:59.120 [2024-12-09 16:00:54.318197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.120 [2024-12-09 16:00:54.318313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.121 [2024-12-09 16:00:54.318326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.121 [2024-12-09 16:00:54.318333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.121 [2024-12-09 16:00:54.318340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.121 [2024-12-09 16:00:54.318354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.121 qpair failed and we were unable to recover it. 
00:27:59.121 [2024-12-09 16:00:54.328225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.121 [2024-12-09 16:00:54.328279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.121 [2024-12-09 16:00:54.328292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.121 [2024-12-09 16:00:54.328299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.121 [2024-12-09 16:00:54.328309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.121 [2024-12-09 16:00:54.328323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.121 qpair failed and we were unable to recover it. 
00:27:59.121 [2024-12-09 16:00:54.338264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.121 [2024-12-09 16:00:54.338319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.121 [2024-12-09 16:00:54.338332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.121 [2024-12-09 16:00:54.338339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.121 [2024-12-09 16:00:54.338345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.121 [2024-12-09 16:00:54.338360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.121 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.348291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.348346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.348359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.348366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.348373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.348387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.358359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.358416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.358429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.358436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.358442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.358457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.368334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.368404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.368417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.368424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.368430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.368444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.378413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.378477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.378491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.378498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.378504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.378518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.388407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.388489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.388502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.388509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.388515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.388529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.398465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.398525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.398538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.398545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.398551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.398565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.408460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.408514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.408527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.408534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.408540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.408556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.418465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.418519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.418536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.418543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.418549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.418564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.428503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.428562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.428574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.428581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.381 [2024-12-09 16:00:54.428587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.381 [2024-12-09 16:00:54.428601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.381 qpair failed and we were unable to recover it. 
00:27:59.381 [2024-12-09 16:00:54.438549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.381 [2024-12-09 16:00:54.438606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.381 [2024-12-09 16:00:54.438619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.381 [2024-12-09 16:00:54.438626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.438632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.438647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.448577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.448634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.448646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.448653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.448659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.448673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.458581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.458656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.458669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.458675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.458684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.458698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.468665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.468722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.468734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.468741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.468748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.468762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.478631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.478687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.478701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.478707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.478714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.478728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.488670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.488726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.488739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.488745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.488752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.488767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.498730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.498787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.498800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.498807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.498813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.498827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.508732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.508789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.508802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.508809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.508816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.508831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.518762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.518817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.518830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.518837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.518843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.518858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.528778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.528830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.528844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.528850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.528857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.528871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.538868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.538924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.538936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.538943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.538949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.538964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.548910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.549018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.549031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.549038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.549044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.549058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.558859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.558918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.382 [2024-12-09 16:00:54.558931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.382 [2024-12-09 16:00:54.558939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.382 [2024-12-09 16:00:54.558946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.382 [2024-12-09 16:00:54.558960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.382 qpair failed and we were unable to recover it. 
00:27:59.382 [2024-12-09 16:00:54.568900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.382 [2024-12-09 16:00:54.568973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.383 [2024-12-09 16:00:54.568986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.383 [2024-12-09 16:00:54.568993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.383 [2024-12-09 16:00:54.568999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.383 [2024-12-09 16:00:54.569014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.383 qpair failed and we were unable to recover it. 
00:27:59.383 [2024-12-09 16:00:54.578996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.383 [2024-12-09 16:00:54.579080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.383 [2024-12-09 16:00:54.579093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.383 [2024-12-09 16:00:54.579100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.383 [2024-12-09 16:00:54.579106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.383 [2024-12-09 16:00:54.579120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.383 qpair failed and we were unable to recover it.
00:27:59.383 [2024-12-09 16:00:54.588957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.383 [2024-12-09 16:00:54.589013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.383 [2024-12-09 16:00:54.589026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.383 [2024-12-09 16:00:54.589036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.383 [2024-12-09 16:00:54.589042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.383 [2024-12-09 16:00:54.589057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.383 qpair failed and we were unable to recover it.
00:27:59.383 [2024-12-09 16:00:54.599008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.383 [2024-12-09 16:00:54.599065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.383 [2024-12-09 16:00:54.599078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.383 [2024-12-09 16:00:54.599085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.383 [2024-12-09 16:00:54.599091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.383 [2024-12-09 16:00:54.599106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.383 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.609076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.609131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.609144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.609151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.609157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.609171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.619032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.619085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.619099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.619105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.619112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.619126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.629089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.629155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.629167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.629174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.629181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.629197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.639147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.639209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.639226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.639233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.639240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.639255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.649136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.649187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.649201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.649208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.649215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.649235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.659079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.659138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.659150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.659158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.659164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.659178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.669181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.669247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.669261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.669268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.669275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.669289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.679205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.679271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.679285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.679292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.679299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.679313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.689290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.689353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.689366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.689373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.689380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.689394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.699268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.699329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.699342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.699350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.699356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.643 [2024-12-09 16:00:54.699371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.643 qpair failed and we were unable to recover it.
00:27:59.643 [2024-12-09 16:00:54.709343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.643 [2024-12-09 16:00:54.709407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.643 [2024-12-09 16:00:54.709420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.643 [2024-12-09 16:00:54.709427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.643 [2024-12-09 16:00:54.709433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.709449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.719328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.719401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.719417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.719425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.719431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.719446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.729341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.729401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.729414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.729422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.729428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.729443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.739374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.739448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.739461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.739468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.739474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.739489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.749413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.749500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.749514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.749521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.749527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.749541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.759490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.759552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.759565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.759573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.759579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.759596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.769449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.769503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.769517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.769524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.769530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.769546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.779482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.779559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.779572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.779579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.779585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.779600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.789487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.789554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.789567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.789574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.789581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.789594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.799529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.799585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.799598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.799605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.799611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.799626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.809560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.809612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.809625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.809632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.809638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.809652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.819580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.819635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.819648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.819655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.819661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.819676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.829623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.829683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.829696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.829703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.829709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.829723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.839696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.644 [2024-12-09 16:00:54.839759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.644 [2024-12-09 16:00:54.839772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.644 [2024-12-09 16:00:54.839779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.644 [2024-12-09 16:00:54.839785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.644 [2024-12-09 16:00:54.839799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.644 qpair failed and we were unable to recover it.
00:27:59.644 [2024-12-09 16:00:54.849675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.645 [2024-12-09 16:00:54.849729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.645 [2024-12-09 16:00:54.849745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.645 [2024-12-09 16:00:54.849752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.645 [2024-12-09 16:00:54.849758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.645 [2024-12-09 16:00:54.849773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.645 qpair failed and we were unable to recover it.
00:27:59.645 [2024-12-09 16:00:54.859703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.645 [2024-12-09 16:00:54.859757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.645 [2024-12-09 16:00:54.859770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.645 [2024-12-09 16:00:54.859777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.645 [2024-12-09 16:00:54.859783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.645 [2024-12-09 16:00:54.859797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.645 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.869747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.869801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.869814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.869821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.869828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.869842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.879778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.879866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.879880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.879887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.879893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.879907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.889785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.889842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.889855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.889861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.889871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.889886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.899806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.899889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.899903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.899910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.899916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.899931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.909844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.909899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.909912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.909919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.909925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.909940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.919874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.905 [2024-12-09 16:00:54.919933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.905 [2024-12-09 16:00:54.919945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.905 [2024-12-09 16:00:54.919952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.905 [2024-12-09 16:00:54.919958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:27:59.905 [2024-12-09 16:00:54.919973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.905 qpair failed and we were unable to recover it.
00:27:59.905 [2024-12-09 16:00:54.929899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.929949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.929962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.929969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.929975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.929990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.939936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.939992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.940005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.940012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.940019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.940034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.949975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.950031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.950044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.950052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.950058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.950073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.959991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.960044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.960057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.960063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.960070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.960084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.970036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.970106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.970121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.970128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.970134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.970149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.980050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.980101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.905 [2024-12-09 16:00:54.980118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.905 [2024-12-09 16:00:54.980125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.905 [2024-12-09 16:00:54.980131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.905 [2024-12-09 16:00:54.980146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.905 qpair failed and we were unable to recover it. 
00:27:59.905 [2024-12-09 16:00:54.990033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.905 [2024-12-09 16:00:54.990116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:54.990129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:54.990137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:54.990143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:54.990157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.000133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.000207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.000225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.000233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.000239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.000254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.010136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.010192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.010205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.010212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.010222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.010238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.020166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.020229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.020243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.020253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.020259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.020274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.030206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.030270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.030283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.030290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.030296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.030311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.040225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.040282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.040295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.040302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.040308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.040323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.050251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.050304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.050317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.050324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.050330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.050345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.060273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.060380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.060392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.060399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.060405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.060420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.070306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.070375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.070388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.070395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.070401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.070416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.080339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.080393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.080406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.080413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.080419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.080434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.090349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.090415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.090429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.090436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.090442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.090457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.100376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.100429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.100442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.100450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.100456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.100471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.110418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.110482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.110495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.110502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.906 [2024-12-09 16:00:55.110508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.906 [2024-12-09 16:00:55.110523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.906 qpair failed and we were unable to recover it. 
00:27:59.906 [2024-12-09 16:00:55.120442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.906 [2024-12-09 16:00:55.120494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.906 [2024-12-09 16:00:55.120508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.906 [2024-12-09 16:00:55.120514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.907 [2024-12-09 16:00:55.120521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.907 [2024-12-09 16:00:55.120536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.907 qpair failed and we were unable to recover it. 
00:27:59.907 [2024-12-09 16:00:55.130471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.907 [2024-12-09 16:00:55.130529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.907 [2024-12-09 16:00:55.130541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.907 [2024-12-09 16:00:55.130548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.907 [2024-12-09 16:00:55.130555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:27:59.907 [2024-12-09 16:00:55.130569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.907 qpair failed and we were unable to recover it. 
00:28:00.165 [2024-12-09 16:00:55.140507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.165 [2024-12-09 16:00:55.140557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.165 [2024-12-09 16:00:55.140570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.165 [2024-12-09 16:00:55.140577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.165 [2024-12-09 16:00:55.140583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.165 [2024-12-09 16:00:55.140597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.165 qpair failed and we were unable to recover it. 
00:28:00.165 [2024-12-09 16:00:55.150558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.165 [2024-12-09 16:00:55.150615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.165 [2024-12-09 16:00:55.150627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.165 [2024-12-09 16:00:55.150639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.165 [2024-12-09 16:00:55.150646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.165 [2024-12-09 16:00:55.150661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.165 qpair failed and we were unable to recover it. 
00:28:00.165 [2024-12-09 16:00:55.160560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.166 [2024-12-09 16:00:55.160665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.166 [2024-12-09 16:00:55.160678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.166 [2024-12-09 16:00:55.160685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.166 [2024-12-09 16:00:55.160691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.166 [2024-12-09 16:00:55.160706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.166 qpair failed and we were unable to recover it. 
00:28:00.166 [2024-12-09 16:00:55.170594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.166 [2024-12-09 16:00:55.170647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.166 [2024-12-09 16:00:55.170660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.166 [2024-12-09 16:00:55.170667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.166 [2024-12-09 16:00:55.170673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.166 [2024-12-09 16:00:55.170687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.166 qpair failed and we were unable to recover it. 
00:28:00.166 [2024-12-09 16:00:55.180610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.166 [2024-12-09 16:00:55.180664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.166 [2024-12-09 16:00:55.180677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.166 [2024-12-09 16:00:55.180684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.166 [2024-12-09 16:00:55.180690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.166 [2024-12-09 16:00:55.180705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.166 qpair failed and we were unable to recover it. 
00:28:00.166 [2024-12-09 16:00:55.190640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.166 [2024-12-09 16:00:55.190696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.166 [2024-12-09 16:00:55.190708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.166 [2024-12-09 16:00:55.190715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.166 [2024-12-09 16:00:55.190721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.166 [2024-12-09 16:00:55.190738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.166 qpair failed and we were unable to recover it. 
00:28:00.166 [2024-12-09 16:00:55.200666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.200721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.200735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.200741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.200748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.200763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.210693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.210748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.210760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.210767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.210774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.210788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.220788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.220875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.220888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.220897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.220903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.220918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.230700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.230757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.230772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.230780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.230786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.230801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.240794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.240856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.240869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.240876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.240882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.240897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.250800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.250857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.250871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.250878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.250884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.250901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.260843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.260897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.260910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.260917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.260923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.260938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.270857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.270935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.270948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.270955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.270962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.270976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.280863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.280936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.280952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.280960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.280966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.280980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.290940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.290995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.291008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.291015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.291021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.291036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.300958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.301007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.301020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.301027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.301033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.301048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.310998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.311052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.311065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.311072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.311078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.166 [2024-12-09 16:00:55.311092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.166 qpair failed and we were unable to recover it.
00:28:00.166 [2024-12-09 16:00:55.321005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.166 [2024-12-09 16:00:55.321063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.166 [2024-12-09 16:00:55.321075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.166 [2024-12-09 16:00:55.321082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.166 [2024-12-09 16:00:55.321092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.321106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.331041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.331128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.331142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.331149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.331155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.331169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.341060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.341129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.341144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.341150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.341157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.341172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.351126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.351183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.351196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.351203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.351209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.351228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.361098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.361179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.361192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.361199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.361205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.361226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.371122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.371186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.371198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.371205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.371211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.371230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.381225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.381328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.381344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.381351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.381358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.381373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.167 [2024-12-09 16:00:55.391201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.167 [2024-12-09 16:00:55.391264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.167 [2024-12-09 16:00:55.391277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.167 [2024-12-09 16:00:55.391284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.167 [2024-12-09 16:00:55.391290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.167 [2024-12-09 16:00:55.391305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.167 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.401206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.401291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.401305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.401311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.401317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.401332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.411229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.411282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.411298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.411305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.411311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.411325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.421296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.421351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.421363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.421370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.421377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.421392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.431290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.431350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.431363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.431370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.431376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.431392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.441295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.441362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.441374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.441381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.441388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.441402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.451421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.451499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.451512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.451518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.451527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.451541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.461415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.461468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.461482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.461489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.461495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.461509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.471487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.471551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.471563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.471571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.471576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.471591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.481427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.481480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.481493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.481499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.481505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.481521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.491510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.491566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.491579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.427 [2024-12-09 16:00:55.491586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.427 [2024-12-09 16:00:55.491593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.427 [2024-12-09 16:00:55.491607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.427 qpair failed and we were unable to recover it.
00:28:00.427 [2024-12-09 16:00:55.501458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.427 [2024-12-09 16:00:55.501523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.427 [2024-12-09 16:00:55.501536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.428 [2024-12-09 16:00:55.501544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.428 [2024-12-09 16:00:55.501550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.428 [2024-12-09 16:00:55.501564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.428 qpair failed and we were unable to recover it.
00:28:00.428 [2024-12-09 16:00:55.511506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.428 [2024-12-09 16:00:55.511599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.428 [2024-12-09 16:00:55.511612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.428 [2024-12-09 16:00:55.511618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.428 [2024-12-09 16:00:55.511624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.428 [2024-12-09 16:00:55.511639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.428 qpair failed and we were unable to recover it.
00:28:00.428 [2024-12-09 16:00:55.521585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.428 [2024-12-09 16:00:55.521643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.428 [2024-12-09 16:00:55.521656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.428 [2024-12-09 16:00:55.521662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.428 [2024-12-09 16:00:55.521669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.428 [2024-12-09 16:00:55.521683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.428 qpair failed and we were unable to recover it.
00:28:00.428 [2024-12-09 16:00:55.531646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.428 [2024-12-09 16:00:55.531717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.428 [2024-12-09 16:00:55.531730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.428 [2024-12-09 16:00:55.531737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.428 [2024-12-09 16:00:55.531743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.428 [2024-12-09 16:00:55.531757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.428 qpair failed and we were unable to recover it.
00:28:00.428 [2024-12-09 16:00:55.541694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.428 [2024-12-09 16:00:55.541749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.428 [2024-12-09 16:00:55.541765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.428 [2024-12-09 16:00:55.541771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.428 [2024-12-09 16:00:55.541777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.428 [2024-12-09 16:00:55.541792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.428 qpair failed and we were unable to recover it.
00:28:00.428 [2024-12-09 16:00:55.551616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.551716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.551729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.551735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.551742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.551756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.561659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.561748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.561761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.561768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.561774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.561789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.571726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.571786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.571800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.571806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.571813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.571827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.581791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.581844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.581857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.581867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.581873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.581888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.591783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.591854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.591868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.591875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.591881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.591896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.601826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.601921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.601934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.601941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.601947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.601962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.611851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.611906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.611919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.611926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.611932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.611947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.621876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.621929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.621943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.621950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.621956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.621970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.428 qpair failed and we were unable to recover it. 
00:28:00.428 [2024-12-09 16:00:55.631913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.428 [2024-12-09 16:00:55.631968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.428 [2024-12-09 16:00:55.631981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.428 [2024-12-09 16:00:55.631987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.428 [2024-12-09 16:00:55.631994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.428 [2024-12-09 16:00:55.632009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.429 qpair failed and we were unable to recover it. 
00:28:00.429 [2024-12-09 16:00:55.641879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.429 [2024-12-09 16:00:55.641938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.429 [2024-12-09 16:00:55.641951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.429 [2024-12-09 16:00:55.641958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.429 [2024-12-09 16:00:55.641965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.429 [2024-12-09 16:00:55.641979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.429 qpair failed and we were unable to recover it. 
00:28:00.429 [2024-12-09 16:00:55.651980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.429 [2024-12-09 16:00:55.652032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.429 [2024-12-09 16:00:55.652045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.429 [2024-12-09 16:00:55.652052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.429 [2024-12-09 16:00:55.652058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.429 [2024-12-09 16:00:55.652072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.429 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.662002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.662055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.662069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.662075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.662082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.662095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.671976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.672033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.672046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.672053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.672059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.672073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.682104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.682158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.682171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.682178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.682184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.682199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.692086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.692141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.692155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.692161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.692168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.692183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.702118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.702194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.702208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.702215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.702225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.702240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.712154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.712212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.712228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.712239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.712245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.712260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.722233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.722339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.722352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.722359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.722365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.722380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.732192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.732251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.732264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.732271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.732277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.732292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.742235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.742291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.689 [2024-12-09 16:00:55.742303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.689 [2024-12-09 16:00:55.742310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.689 [2024-12-09 16:00:55.742316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.689 [2024-12-09 16:00:55.742330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.689 qpair failed and we were unable to recover it. 
00:28:00.689 [2024-12-09 16:00:55.752272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.689 [2024-12-09 16:00:55.752330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.752342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.752349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.752356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.752373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.762291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.762342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.762355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.762362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.762369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.762384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.772311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.772366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.772379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.772385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.772392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.772407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.782346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.782401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.782414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.782420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.782426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.782441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.792306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.792369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.792382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.792388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.792394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.792409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.802396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.802457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.802470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.802478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.802484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.802498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.812427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.690 [2024-12-09 16:00:55.812518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.690 [2024-12-09 16:00:55.812530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.690 [2024-12-09 16:00:55.812537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.690 [2024-12-09 16:00:55.812543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.690 [2024-12-09 16:00:55.812557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.690 qpair failed and we were unable to recover it. 
00:28:00.690 [2024-12-09 16:00:55.822452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.822554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.822568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.822575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.822581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.822596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.832485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.832543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.832556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.832563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.832569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.832584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.842569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.842644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.842660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.842667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.842673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.842688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.852609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.852689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.852702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.852709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.852715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.852729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.862561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.862618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.862630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.862637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.862643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.862658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.872643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.872710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.872724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.690 [2024-12-09 16:00:55.872731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.690 [2024-12-09 16:00:55.872737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.690 [2024-12-09 16:00:55.872752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.690 qpair failed and we were unable to recover it.
00:28:00.690 [2024-12-09 16:00:55.882621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.690 [2024-12-09 16:00:55.882677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.690 [2024-12-09 16:00:55.882690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.691 [2024-12-09 16:00:55.882697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.691 [2024-12-09 16:00:55.882708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.691 [2024-12-09 16:00:55.882723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.691 qpair failed and we were unable to recover it.
00:28:00.691 [2024-12-09 16:00:55.892630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.691 [2024-12-09 16:00:55.892685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.691 [2024-12-09 16:00:55.892698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.691 [2024-12-09 16:00:55.892705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.691 [2024-12-09 16:00:55.892711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.691 [2024-12-09 16:00:55.892725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.691 qpair failed and we were unable to recover it.
00:28:00.691 [2024-12-09 16:00:55.902672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.691 [2024-12-09 16:00:55.902725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.691 [2024-12-09 16:00:55.902738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.691 [2024-12-09 16:00:55.902744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.691 [2024-12-09 16:00:55.902750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.691 [2024-12-09 16:00:55.902765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.691 qpair failed and we were unable to recover it.
00:28:00.691 [2024-12-09 16:00:55.912726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.691 [2024-12-09 16:00:55.912781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.691 [2024-12-09 16:00:55.912794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.691 [2024-12-09 16:00:55.912800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.691 [2024-12-09 16:00:55.912807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.691 [2024-12-09 16:00:55.912822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.691 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.922771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.922827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.922840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.922847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.922853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.922867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.932771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.932844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.932857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.932865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.932870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.932885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.942786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.942861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.942874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.942881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.942887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.942901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.952824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.952886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.952899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.952908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.952914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.952929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.962873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.962956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.962970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.962977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.962983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.962998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.972871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.972922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.972938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.972945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.972952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.972966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.982840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.982893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.982906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.982913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.982919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.982934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:55.992937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:55.993003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:55.993015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:55.993022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:55.993029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:55.993044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:56.002964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:56.003020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:56.003034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:56.003040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:56.003047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:56.003061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:56.012991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:56.013045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:56.013058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.951 [2024-12-09 16:00:56.013065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.951 [2024-12-09 16:00:56.013074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.951 [2024-12-09 16:00:56.013089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.951 qpair failed and we were unable to recover it.
00:28:00.951 [2024-12-09 16:00:56.023041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.951 [2024-12-09 16:00:56.023101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.951 [2024-12-09 16:00:56.023113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.023120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.023126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.023141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.033049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.033103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.033116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.033123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.033130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.033144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.043087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.043150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.043163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.043171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.043177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.043192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.053109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.053180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.053193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.053199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.053205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.053225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.063155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.063209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.063225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.063232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.063238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.063254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.073168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.073236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.073249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.073256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.073262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.073277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.083195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.083275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.083289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.083297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.083303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.083318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.093257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.093327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.093340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.093347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.093353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.093368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.103255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.103309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.103326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.103333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.103339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.103354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.113291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.113349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.113362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.113369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.113375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.113390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.123331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.123389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.123403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.123410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.123417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.123432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.133332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.133385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.133398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.133405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.133412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.133426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.143395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.143468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.143481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.143491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.143497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.952 [2024-12-09 16:00:56.143511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.952 qpair failed and we were unable to recover it.
00:28:00.952 [2024-12-09 16:00:56.153402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.952 [2024-12-09 16:00:56.153458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.952 [2024-12-09 16:00:56.153471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.952 [2024-12-09 16:00:56.153478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.952 [2024-12-09 16:00:56.153485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.953 [2024-12-09 16:00:56.153500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.953 qpair failed and we were unable to recover it.
00:28:00.953 [2024-12-09 16:00:56.163493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.953 [2024-12-09 16:00:56.163549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.953 [2024-12-09 16:00:56.163562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.953 [2024-12-09 16:00:56.163570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.953 [2024-12-09 16:00:56.163576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90
00:28:00.953 [2024-12-09 16:00:56.163590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.953 qpair failed and we were unable to recover it.
00:28:00.953 [2024-12-09 16:00:56.173438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.953 [2024-12-09 16:00:56.173492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.953 [2024-12-09 16:00:56.173506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.953 [2024-12-09 16:00:56.173512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.953 [2024-12-09 16:00:56.173519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:00.953 [2024-12-09 16:00:56.173533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.953 qpair failed and we were unable to recover it. 
00:28:01.212 [2024-12-09 16:00:56.183472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.212 [2024-12-09 16:00:56.183537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.212 [2024-12-09 16:00:56.183549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.212 [2024-12-09 16:00:56.183557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.212 [2024-12-09 16:00:56.183563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.212 [2024-12-09 16:00:56.183577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.212 qpair failed and we were unable to recover it. 
00:28:01.212 [2024-12-09 16:00:56.193547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.193601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.193614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.193621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.193628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.193643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.203458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.203515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.203529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.203536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.203542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.203556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.213573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.213625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.213638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.213644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.213651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.213666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.223588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.223689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.223702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.223709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.223715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.223730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.233638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.233716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.233729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.233736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.233742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.233756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.243695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.243785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.243798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.243804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.243811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.243826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.253712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.253771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.253785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.253792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.253798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.253813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.263701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.263752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.263765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.263772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.263779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.263793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.273738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.273794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.273807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.273817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.273823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.273838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.283803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.283858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.283871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.283878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.283884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.283898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.293764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.293831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.293845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.293853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.293859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.293874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.303819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.303873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.303885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.303892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.303899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.303913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.313913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.314011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.213 [2024-12-09 16:00:56.314024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.213 [2024-12-09 16:00:56.314031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.213 [2024-12-09 16:00:56.314037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.213 [2024-12-09 16:00:56.314054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.213 qpair failed and we were unable to recover it. 
00:28:01.213 [2024-12-09 16:00:56.323859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.213 [2024-12-09 16:00:56.323912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.323926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.323933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.323939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.214 [2024-12-09 16:00:56.323953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.333909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.333963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.333977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.333984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.333990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdedc000b90 00:28:01.214 [2024-12-09 16:00:56.334004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.343957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.344054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.344109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.344134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.344154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fded8000b90 00:28:01.214 [2024-12-09 16:00:56.344205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.353973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.354110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.354137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.354151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.354165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fded8000b90 00:28:01.214 [2024-12-09 16:00:56.354196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.364026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.364153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.364208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.364245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.364267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdee4000b90 00:28:01.214 [2024-12-09 16:00:56.364317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.374045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.374147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.374201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.374238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.374261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xef2500 00:28:01.214 [2024-12-09 16:00:56.374308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.214 qpair failed and we were unable to recover it. 
00:28:01.214 [2024-12-09 16:00:56.384073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.384155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.384183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.384197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.384210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xef2500 00:28:01.214 [2024-12-09 16:00:56.384250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.214 [2024-12-09 16:00:56.384427] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:01.214 A controller has encountered a failure and is being reset. 
00:28:01.214 [2024-12-09 16:00:56.394117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.214 [2024-12-09 16:00:56.394258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.214 [2024-12-09 16:00:56.394299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.214 [2024-12-09 16:00:56.394320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.214 [2024-12-09 16:00:56.394340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdee4000b90 00:28:01.214 [2024-12-09 16:00:56.394388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:01.214 qpair failed and we were unable to recover it. 00:28:01.473 Controller properly reset. 00:28:01.473 Initializing NVMe Controllers 00:28:01.473 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:01.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:01.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:01.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:01.473 Initialization complete. Launching workers. 
00:28:01.473 Starting thread on core 1 00:28:01.473 Starting thread on core 2 00:28:01.473 Starting thread on core 3 00:28:01.473 Starting thread on core 0 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:01.473 00:28:01.473 real 0m10.825s 00:28:01.473 user 0m19.240s 00:28:01.473 sys 0m4.737s 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.473 ************************************ 00:28:01.473 END TEST nvmf_target_disconnect_tc2 00:28:01.473 ************************************ 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:01.473 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:01.474 rmmod nvme_tcp 00:28:01.474 rmmod nvme_fabrics 00:28:01.474 rmmod nvme_keyring 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2158578 ']' 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2158578 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2158578 ']' 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2158578 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2158578 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2158578' 00:28:01.474 killing process with pid 2158578 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2158578 00:28:01.474 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2158578 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.733 16:00:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.270 16:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.270 00:28:04.270 real 0m19.539s 00:28:04.270 user 0m47.155s 00:28:04.270 sys 0m9.626s 00:28:04.270 16:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.270 16:00:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:04.270 ************************************ 00:28:04.270 END TEST nvmf_target_disconnect 00:28:04.270 ************************************ 00:28:04.270 16:00:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:04.270 00:28:04.270 real 5m50.686s 00:28:04.270 user 10m27.796s 00:28:04.270 sys 1m58.193s 00:28:04.270 16:00:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.270 16:00:58
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.270 ************************************ 00:28:04.270 END TEST nvmf_host 00:28:04.270 ************************************ 00:28:04.270 16:00:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:04.270 16:00:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:04.270 16:00:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:04.270 16:00:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:04.270 16:00:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.270 16:00:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.270 ************************************ 00:28:04.270 START TEST nvmf_target_core_interrupt_mode 00:28:04.270 ************************************ 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:04.270 * Looking for test storage... 
00:28:04.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:04.270 16:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:04.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.270 --rc 
genhtml_branch_coverage=1 00:28:04.270 --rc genhtml_function_coverage=1 00:28:04.270 --rc genhtml_legend=1 00:28:04.270 --rc geninfo_all_blocks=1 00:28:04.270 --rc geninfo_unexecuted_blocks=1 00:28:04.270 00:28:04.270 ' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:04.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.270 --rc genhtml_branch_coverage=1 00:28:04.270 --rc genhtml_function_coverage=1 00:28:04.270 --rc genhtml_legend=1 00:28:04.270 --rc geninfo_all_blocks=1 00:28:04.270 --rc geninfo_unexecuted_blocks=1 00:28:04.270 00:28:04.270 ' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:04.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.270 --rc genhtml_branch_coverage=1 00:28:04.270 --rc genhtml_function_coverage=1 00:28:04.270 --rc genhtml_legend=1 00:28:04.270 --rc geninfo_all_blocks=1 00:28:04.270 --rc geninfo_unexecuted_blocks=1 00:28:04.270 00:28:04.270 ' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:04.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.270 --rc genhtml_branch_coverage=1 00:28:04.270 --rc genhtml_function_coverage=1 00:28:04.270 --rc genhtml_legend=1 00:28:04.270 --rc geninfo_all_blocks=1 00:28:04.270 --rc geninfo_unexecuted_blocks=1 00:28:04.270 00:28:04.270 ' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.270 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.271 
16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.271 16:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.271 
16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:04.271 ************************************ 00:28:04.271 START TEST nvmf_abort 00:28:04.271 ************************************ 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:04.271 * Looking for test storage... 
00:28:04.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:04.271 16:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:04.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.271 --rc genhtml_branch_coverage=1 00:28:04.271 --rc genhtml_function_coverage=1 00:28:04.271 --rc genhtml_legend=1 00:28:04.271 --rc geninfo_all_blocks=1 00:28:04.271 --rc geninfo_unexecuted_blocks=1 00:28:04.271 00:28:04.271 ' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:04.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.271 --rc genhtml_branch_coverage=1 00:28:04.271 --rc genhtml_function_coverage=1 00:28:04.271 --rc genhtml_legend=1 00:28:04.271 --rc geninfo_all_blocks=1 00:28:04.271 --rc geninfo_unexecuted_blocks=1 00:28:04.271 00:28:04.271 ' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:04.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.271 --rc genhtml_branch_coverage=1 00:28:04.271 --rc genhtml_function_coverage=1 00:28:04.271 --rc genhtml_legend=1 00:28:04.271 --rc geninfo_all_blocks=1 00:28:04.271 --rc geninfo_unexecuted_blocks=1 00:28:04.271 00:28:04.271 ' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:04.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.271 --rc genhtml_branch_coverage=1 00:28:04.271 --rc genhtml_function_coverage=1 00:28:04.271 --rc genhtml_legend=1 00:28:04.271 --rc geninfo_all_blocks=1 00:28:04.271 --rc geninfo_unexecuted_blocks=1 00:28:04.271 00:28:04.271 ' 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.271 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.272 16:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.272 16:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.272 16:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.841 16:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:10.841 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:10.841 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.841 
16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:10.841 Found net devices under 0000:af:00.0: cvl_0_0 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:10.841 Found net devices under 0000:af:00.1: cvl_0_1 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.841 16:01:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.841 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:28:10.842 00:28:10.842 --- 10.0.0.2 ping statistics --- 00:28:10.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.842 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:10.842 00:28:10.842 --- 10.0.0.1 ping statistics --- 00:28:10.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.842 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2163086 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2163086 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2163086 ']' 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 [2024-12-09 16:01:05.375267] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:10.842 [2024-12-09 16:01:05.376192] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:28:10.842 [2024-12-09 16:01:05.376237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.842 [2024-12-09 16:01:05.454014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:10.842 [2024-12-09 16:01:05.494030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.842 [2024-12-09 16:01:05.494065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.842 [2024-12-09 16:01:05.494072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.842 [2024-12-09 16:01:05.494078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.842 [2024-12-09 16:01:05.494083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.842 [2024-12-09 16:01:05.495410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.842 [2024-12-09 16:01:05.495518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.842 [2024-12-09 16:01:05.495519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.842 [2024-12-09 16:01:05.563442] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:10.842 [2024-12-09 16:01:05.564234] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:10.842 [2024-12-09 16:01:05.564497] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:10.842 [2024-12-09 16:01:05.564604] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 [2024-12-09 16:01:05.632337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:10.842 Malloc0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 Delay0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 [2024-12-09 16:01:05.724213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.842 16:01:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:10.842 [2024-12-09 16:01:05.847011] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:12.815 Initializing NVMe Controllers 00:28:12.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:12.815 controller IO queue size 128 less than required 00:28:12.815 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:12.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:12.815 Initialization complete. Launching workers. 
00:28:12.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38145 00:28:12.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38202, failed to submit 66 00:28:12.815 success 38145, unsuccessful 57, failed 0 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.815 16:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.815 rmmod nvme_tcp 00:28:12.815 rmmod nvme_fabrics 00:28:12.815 rmmod nvme_keyring 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.815 16:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2163086 ']' 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2163086 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2163086 ']' 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2163086 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.815 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163086 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163086' 00:28:13.074 killing process with pid 2163086 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2163086 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2163086 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.074 16:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.074 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.075 16:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.611 00:28:15.611 real 0m11.090s 00:28:15.611 user 0m10.476s 00:28:15.611 sys 0m5.619s 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.611 ************************************ 00:28:15.611 END TEST nvmf_abort 00:28:15.611 ************************************ 00:28:15.611 16:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:15.611 ************************************ 00:28:15.611 START TEST nvmf_ns_hotplug_stress 00:28:15.611 ************************************ 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.611 * Looking for test storage... 
00:28:15.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.611 16:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.611 16:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.611 --rc genhtml_branch_coverage=1 00:28:15.611 --rc genhtml_function_coverage=1 00:28:15.611 --rc genhtml_legend=1 00:28:15.611 --rc geninfo_all_blocks=1 00:28:15.611 --rc geninfo_unexecuted_blocks=1 00:28:15.611 00:28:15.611 ' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.611 --rc genhtml_branch_coverage=1 00:28:15.611 --rc genhtml_function_coverage=1 00:28:15.611 --rc genhtml_legend=1 00:28:15.611 --rc geninfo_all_blocks=1 00:28:15.611 --rc geninfo_unexecuted_blocks=1 00:28:15.611 00:28:15.611 ' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.611 --rc genhtml_branch_coverage=1 00:28:15.611 --rc genhtml_function_coverage=1 00:28:15.611 --rc genhtml_legend=1 00:28:15.611 --rc geninfo_all_blocks=1 00:28:15.611 --rc geninfo_unexecuted_blocks=1 00:28:15.611 00:28:15.611 ' 00:28:15.611 16:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.611 --rc genhtml_branch_coverage=1 00:28:15.611 --rc genhtml_function_coverage=1 00:28:15.611 --rc genhtml_legend=1 00:28:15.611 --rc geninfo_all_blocks=1 00:28:15.611 --rc geninfo_unexecuted_blocks=1 00:28:15.611 00:28:15.611 ' 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.611 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.611 16:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.612 
16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.612 16:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.183 
16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.183 16:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.183 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.183 16:01:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.183 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.183 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.184 
16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.184 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.184 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.184 
16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:28:22.184 00:28:22.184 --- 10.0.0.2 ping statistics --- 00:28:22.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.184 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:28:22.184 00:28:22.184 --- 10.0.0.1 ping statistics --- 00:28:22.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.184 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.184 16:01:16 
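The trace above (nvmf/common.sh's `nvmf_tcp_init`) moves one physical port into a private network namespace, addresses the pair as 10.0.0.1/10.0.0.2, opens TCP port 4420, and ping-checks both directions. A minimal dry-run sketch of that sequence follows; the interface names, addresses, and the SPDK_NVMF iptables comment are taken from the log, but the commands are only echoed here, since the real ones need root and the cvl_0_* devices:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in the log.
# Interface names (cvl_0_0, cvl_0_1) and addresses come from the trace above;
# 'run' only prints each command instead of executing it.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged so teardown can find the rule later:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
# Connectivity check in both directions, as in the log:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target interface sits inside the namespace, the nvmf target app is later launched under `ip netns exec cvl_0_0_ns_spdk` while the initiator-side tools run in the root namespace, giving a real two-endpoint TCP path over one physical cable.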
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2167035 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2167035 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2167035 ']' 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.184 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.184 [2024-12-09 16:01:16.489762] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:22.184 [2024-12-09 16:01:16.490725] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:28:22.184 [2024-12-09 16:01:16.490764] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.184 [2024-12-09 16:01:16.569376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:22.184 [2024-12-09 16:01:16.610101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.184 [2024-12-09 16:01:16.610135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.184 [2024-12-09 16:01:16.610142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.184 [2024-12-09 16:01:16.610148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.184 [2024-12-09 16:01:16.610153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:22.184 [2024-12-09 16:01:16.611555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.184 [2024-12-09 16:01:16.611659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.184 [2024-12-09 16:01:16.611660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.185 [2024-12-09 16:01:16.679188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:22.185 [2024-12-09 16:01:16.680013] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:22.185 [2024-12-09 16:01:16.680248] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.185 [2024-12-09 16:01:16.680333] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:28:22.185 [2024-12-09 16:01:16.912438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:22.185 16:01:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:22.185 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:22.185 [2024-12-09 16:01:17.308816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:22.185 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:22.444 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:28:22.703 Malloc0
00:28:22.703 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:22.703 Delay0
00:28:22.703 16:01:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:22.961 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:28:23.220 NULL1
00:28:23.220 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:28:23.478 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2167398
00:28:23.478 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:28:23.478 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:23.478 16:01:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:24.414 Read completed with error (sct=0, sc=11)
00:28:24.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.414 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:24.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:24.672 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:28:24.672 16:01:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:28:24.930 true
00:28:24.930 16:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:24.930 16:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:25.864 16:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:25.864 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:28:25.864 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:28:26.123 true
00:28:26.123 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:26.123 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:26.382 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:26.640 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:28:26.640 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:28:26.640 true
00:28:26.899 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:26.899 16:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:27.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:27.835 16:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:27.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:27.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:28.093 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:28:28.093 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:28:28.093 true
00:28:28.093 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:28.093 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:28.352 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:28.610 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:28:28.610 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:28:28.869 true
00:28:28.869 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:28.869 16:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 16:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:30.245 16:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:28:30.245 16:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:28:30.245 true
00:28:30.503 16:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:30.503 16:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:31.328 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:31.328 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:28:31.328 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:28:31.586 true
00:28:31.586 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:31.586 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.845 16:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:32.104 16:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:28:32.104 16:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:28:32.104 true
00:28:32.104 16:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:32.104 16:01:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 16:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:33.479 16:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:28:33.479 16:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:28:33.738 true
00:28:33.738 16:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:33.738 16:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:34.674 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:34.674 16:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:34.674 16:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:28:34.674 16:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:28:34.932 true
00:28:34.932 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:34.932 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:35.191 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:35.449 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:28:35.449 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:28:35.708 true
00:28:35.708 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:35.708 16:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:36.644 16:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:36.902 16:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:28:36.902 16:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:28:36.902 true
00:28:36.902 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:36.902 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:37.161 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:37.419 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:28:37.419 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:28:37.677 true
00:28:37.677 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:37.677 16:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:38.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:38.613 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:38.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:38.871 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:28:38.871 16:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:28:39.130 true
00:28:39.130 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:39.130 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:39.388 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:39.388 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:28:39.388 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:28:39.647 true
00:28:39.647 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:39.647 16:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:41.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.022 16:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:41.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:41.023 16:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:28:41.023 16:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:28:41.281 true
00:28:41.281 16:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:41.281 16:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:42.216 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:42.216 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:28:42.216 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:28:42.475 true
00:28:42.475 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:42.475 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:42.475 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:42.733 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:28:42.733 16:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:28:42.992 true
00:28:42.992 16:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:42.992 16:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:43.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:43.927 16:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:43.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:44.185 16:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:28:44.185 16:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:28:44.443 true
00:28:44.443 16:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:44.443 16:01:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:45.376 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:45.376 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:28:45.376 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:28:45.634 true
00:28:45.634 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:45.634 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:45.892 16:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:46.150 16:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:28:46.150 16:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:28:46.150 true
00:28:46.408 16:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:46.408 16:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:47.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.342 16:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:47.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:47.600 16:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:28:47.600 16:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:28:47.859 true
00:28:47.859 16:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:47.859 16:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:48.794 16:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:48.794 16:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:28:48.794 16:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:28:49.053 true
00:28:49.053 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:49.053 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:49.311 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:49.570 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:28:49.570 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:28:49.570 true
00:28:49.570 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:49.570 16:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 16:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:50.951 16:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:28:50.951 16:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:28:51.210 true
00:28:51.210 16:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:51.210 16:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:52.145 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:52.145 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:28:52.145 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:28:52.404 true
00:28:52.404 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:52.404 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:52.662 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:52.921 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:52.921 16:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:52.921 true
00:28:52.921 16:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398
00:28:52.921 16:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:54.297 Initializing NVMe Controllers
00:28:54.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:54.297 Controller IO queue size 128, less than required.
00:28:54.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.297 Controller IO queue size 128, less than required.
00:28:54.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:54.297 Initialization complete. Launching workers.
00:28:54.297 ========================================================
00:28:54.297 Latency(us)
00:28:54.297 Device Information : IOPS MiB/s Average min max
00:28:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1922.30 0.94 45479.38 2253.14 1019977.95
00:28:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17964.91 8.77 7124.77 1747.00 333964.51
00:28:54.297 ========================================================
00:28:54.297 Total : 19887.21 9.71 10832.13 1747.00 1019977.95
00:28:54.297
00:28:54.297 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.297 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:54.297 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:54.297 true 00:28:54.297 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2167398 00:28:54.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2167398) - No such process 00:28:54.297 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2167398 00:28:54.297 16:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.556 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:54.815 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:54.815 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:54.815 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:54.815 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:54.815 16:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:55.074 null0 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:55.074 null1 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.074 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:55.333 null2 00:28:55.333 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.333 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.333 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:55.592 null3 00:28:55.592 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.592 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.592 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:55.592 null4 00:28:55.851 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.851 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.851 16:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:55.851 null5 00:28:55.851 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:28:55.851 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.851 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:56.110 null6 00:28:56.110 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.110 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.110 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:56.369 null7 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.369 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2172748 2172752 2172753 2172757 2172760 2172763 2172764 2172767 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:56.370 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.628 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:56.628 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:56.628 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.629 16:01:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.629 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.888 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.888 16:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:56.888 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.147 16:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.147 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.407 16:01:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.407 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.667 16:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.926 16:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.926 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.927 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.927 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.927 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.186 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.445 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.446 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.705 16:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.964 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.224 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.484 16:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.484 16:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.484 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.744 16:01:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.744 16:01:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.003 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.262 16:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.262 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.263 16:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.263 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:00.522 rmmod nvme_tcp 00:29:00.522 rmmod nvme_fabrics 00:29:00.522 rmmod nvme_keyring 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2167035 ']' 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2167035 00:29:00.522 16:01:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2167035 ']' 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2167035 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167035 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167035' 00:29:00.522 killing process with pid 2167035 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2167035 00:29:00.522 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2167035 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:00.781 
16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.781 16:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.687 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.687 00:29:02.687 real 0m47.473s 00:29:02.687 user 2m58.056s 00:29:02.687 sys 0m19.384s 00:29:02.687 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.687 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:02.687 ************************************ 00:29:02.687 END TEST nvmf_ns_hotplug_stress 00:29:02.687 ************************************ 00:29:02.947 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp --interrupt-mode 00:29:02.947 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:02.947 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.947 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.947 ************************************ 00:29:02.947 START TEST nvmf_delete_subsystem 00:29:02.947 ************************************ 00:29:02.947 16:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:02.947 * Looking for test storage... 00:29:02.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 
00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.947 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.948 --rc genhtml_branch_coverage=1 00:29:02.948 --rc genhtml_function_coverage=1 00:29:02.948 --rc genhtml_legend=1 00:29:02.948 --rc geninfo_all_blocks=1 00:29:02.948 --rc geninfo_unexecuted_blocks=1 00:29:02.948 00:29:02.948 ' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.948 --rc genhtml_branch_coverage=1 00:29:02.948 --rc genhtml_function_coverage=1 00:29:02.948 --rc genhtml_legend=1 00:29:02.948 --rc geninfo_all_blocks=1 00:29:02.948 --rc geninfo_unexecuted_blocks=1 00:29:02.948 00:29:02.948 ' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.948 --rc genhtml_branch_coverage=1 00:29:02.948 --rc genhtml_function_coverage=1 00:29:02.948 --rc genhtml_legend=1 00:29:02.948 --rc geninfo_all_blocks=1 00:29:02.948 --rc geninfo_unexecuted_blocks=1 00:29:02.948 00:29:02.948 ' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.948 --rc genhtml_branch_coverage=1 00:29:02.948 --rc genhtml_function_coverage=1 00:29:02.948 --rc genhtml_legend=1 00:29:02.948 --rc geninfo_all_blocks=1 00:29:02.948 --rc geninfo_unexecuted_blocks=1 00:29:02.948 00:29:02.948 ' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.948 16:01:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.948 16:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.521 16:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.521 16:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:09.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:09.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.521 16:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:09.521 Found net devices under 0000:af:00.0: cvl_0_0 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:09.521 Found net devices under 0000:af:00.1: cvl_0_1 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.521 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.522 16:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.522 16:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.522 16:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:09.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:29:09.522 00:29:09.522 --- 10.0.0.2 ping statistics --- 00:29:09.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.522 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:09.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:09.522 00:29:09.522 --- 10.0.0.1 ping statistics --- 00:29:09.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.522 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.522 
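For reference, the namespace plumbing the harness just performed (address flush, netns move, addressing both ends, firewall punch, ping check in each direction) can be sketched as a standalone script. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the cvl_0_0_ns_spdk namespace are taken from this log; the DRY_RUN/run wrapper is illustrative only, and actually executing the commands requires root.

```shell
# Sketch of the target-namespace setup performed above. Interface names,
# addresses and the namespace name are copied from this log; run() just
# echoes each step unless DRY_RUN=0 (real execution needs root).
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 0 ] && "$@"; return 0; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
# Move the target-side port into its own namespace ...
run ip link set cvl_0_0 netns "$NS"
# ... then address both ends; the initiator stays in the root namespace.
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port and verify reachability in both directions.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With both pings answering (as in the log above), the target app can then be launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.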
16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2176988 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2176988 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2176988 ']' 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 [2024-12-09 16:02:04.190380] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:09.522 [2024-12-09 16:02:04.191390] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:29:09.522 [2024-12-09 16:02:04.191429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.522 [2024-12-09 16:02:04.271752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:09.522 [2024-12-09 16:02:04.311633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.522 [2024-12-09 16:02:04.311668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.522 [2024-12-09 16:02:04.311676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.522 [2024-12-09 16:02:04.311682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.522 [2024-12-09 16:02:04.311687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.522 [2024-12-09 16:02:04.312848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.522 [2024-12-09 16:02:04.312848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.522 [2024-12-09 16:02:04.379996] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:09.522 [2024-12-09 16:02:04.380521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:09.522 [2024-12-09 16:02:04.380702] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 [2024-12-09 16:02:04.445708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 [2024-12-09 16:02:04.473951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.522 NULL1 00:29:09.522 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.523 Delay0 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2177125 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:09.523 16:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:09.523 [2024-12-09 16:02:04.584476] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
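The test script then drives the target over JSON-RPC. A sketch of the call sequence shown above — the NQN, serial number, bdev names, and delay parameters are exactly those in the log (the 1,000,000 µs delays keep I/O in flight long enough for the deletion to abort it); replaying it against a live target via SPDK's `rpc.py` is left as a comment, since its path is an assumption here:

```shell
# The RPC calls issued above, in order: TCP transport, subsystem, listener,
# a 1000 MiB / 512 B-block null bdev, a delay bdev wrapping it with
# 1,000,000 us average and p99 latencies, and the namespace attach.
rpc_sequence() {
  cat <<'EOF'
nvmf_create_transport -t tcp -o -u 8192
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdev_null_create NULL1 1000 512
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
EOF
}
# To replay against a live target (the rpc.py path is an assumption):
#   rpc_sequence | while read -r c; do ./scripts/rpc.py $c; done
rpc_sequence
```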
00:29:11.425 16:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.425 16:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.425 16:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, 
sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.684 starting I/O failed: -6 00:29:11.684 Write completed with error (sct=0, sc=8) 00:29:11.684 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 [2024-12-09 16:02:06.675750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198db40 is same with the state(6) to be set 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error 
(sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 
00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 [2024-12-09 16:02:06.676475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d780 is same with the state(6) to be set 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 
00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 starting I/O failed: -6 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 [2024-12-09 16:02:06.676836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6eb400d490 is same with the state(6) to be set 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed 
with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:11.685 Write completed with error (sct=0, sc=8) 00:29:11.685 Read completed with error (sct=0, sc=8) 00:29:12.759 [2024-12-09 16:02:07.637581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x198e9b0 is same with the state(6) to be set 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 [2024-12-09 16:02:07.677384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6eb400d7c0 is same with the state(6) to be set 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 
00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 [2024-12-09 16:02:07.677797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6eb400d020 is same with the state(6) to be set 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 
Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 [2024-12-09 16:02:07.678932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d2c0 is same with the state(6) to be set 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Write completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 Read completed with error (sct=0, sc=8) 00:29:12.759 [2024-12-09 16:02:07.679992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d960 is same with the state(6) to be set 00:29:12.759 Initializing NVMe Controllers 00:29:12.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.759 Controller IO queue size 128, less than required. 
00:29:12.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:12.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:12.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:12.759 Initialization complete. Launching workers. 00:29:12.759 ======================================================== 00:29:12.759 Latency(us) 00:29:12.759 Device Information : IOPS MiB/s Average min max 00:29:12.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.29 0.08 916139.32 400.82 2001710.19 00:29:12.759 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.85 0.08 939635.68 225.11 1999778.34 00:29:12.759 ======================================================== 00:29:12.759 Total : 326.13 0.16 927583.51 225.11 2001710.19 00:29:12.759 00:29:12.759 [2024-12-09 16:02:07.680381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198e9b0 (9): Bad file descriptor 00:29:12.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:12.759 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.759 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:12.759 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2177125 00:29:12.759 16:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 2177125 00:29:13.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2177125) - No such process 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2177125 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2177125 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2177125 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.018 [2024-12-09 16:02:08.213936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.018 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2177596 00:29:13.019 16:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:13.019 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:13.277 [2024-12-09 16:02:08.279464] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:13.536 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:13.536 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:13.536 16:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.103 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.103 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:14.103 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.670 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.670 16:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:14.670 16:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.237 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.237 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:15.237 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.804 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.804 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:15.804 16:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:16.063 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.063 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:16.063 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:16.322 Initializing NVMe Controllers 00:29:16.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.322 Controller IO queue size 128, less than required. 00:29:16.322 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:16.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:16.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:16.322 Initialization complete. Launching workers. 00:29:16.322 ======================================================== 00:29:16.322 Latency(us) 00:29:16.322 Device Information : IOPS MiB/s Average min max 00:29:16.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002145.46 1000147.19 1041269.25 00:29:16.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003667.82 1000157.83 1009524.31 00:29:16.322 ======================================================== 00:29:16.322 Total : 256.00 0.12 1002906.64 1000147.19 1041269.25 00:29:16.322 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2177596 00:29:16.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2177596) - No such process 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2177596 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.580 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.580 rmmod nvme_tcp 00:29:16.580 rmmod nvme_fabrics 00:29:16.839 rmmod nvme_keyring 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2176988 ']' 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2176988 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2176988 ']' 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2176988 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176988 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176988' 00:29:16.839 killing process with pid 2176988 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2176988 00:29:16.839 16:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2176988 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:16.839 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.098 16:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.003 00:29:19.003 real 0m16.182s 00:29:19.003 user 0m25.587s 00:29:19.003 sys 0m6.553s 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.003 ************************************ 00:29:19.003 END TEST nvmf_delete_subsystem 00:29:19.003 ************************************ 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:19.003 ************************************ 00:29:19.003 START TEST nvmf_host_management 00:29:19.003 ************************************ 00:29:19.003 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:19.263 * Looking for test storage... 
00:29:19.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.263 16:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:19.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.263 --rc genhtml_branch_coverage=1 00:29:19.263 --rc genhtml_function_coverage=1 00:29:19.263 --rc genhtml_legend=1 00:29:19.263 --rc geninfo_all_blocks=1 00:29:19.263 --rc geninfo_unexecuted_blocks=1 00:29:19.263 00:29:19.263 ' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:19.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.263 --rc genhtml_branch_coverage=1 00:29:19.263 --rc genhtml_function_coverage=1 00:29:19.263 --rc genhtml_legend=1 00:29:19.263 --rc geninfo_all_blocks=1 00:29:19.263 --rc geninfo_unexecuted_blocks=1 00:29:19.263 00:29:19.263 ' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:19.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.263 --rc genhtml_branch_coverage=1 00:29:19.263 --rc genhtml_function_coverage=1 00:29:19.263 --rc genhtml_legend=1 00:29:19.263 --rc geninfo_all_blocks=1 00:29:19.263 --rc geninfo_unexecuted_blocks=1 00:29:19.263 00:29:19.263 ' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:19.263 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.263 --rc genhtml_branch_coverage=1 00:29:19.263 --rc genhtml_function_coverage=1 00:29:19.263 --rc genhtml_legend=1 00:29:19.263 --rc geninfo_all_blocks=1 00:29:19.263 --rc geninfo_unexecuted_blocks=1 00:29:19.263 00:29:19.263 ' 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.263 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.264 16:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.264 
16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.264 16:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:25.830 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.830 
16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.831 16:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:25.831 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.831 16:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:25.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.831 16:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:25.831 Found net devices under 0000:af:00.0: cvl_0_0 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:25.831 Found net devices under 0000:af:00.1: cvl_0_1 00:29:25.831 16:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:29:25.831 00:29:25.831 --- 10.0.0.2 ping statistics --- 00:29:25.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.831 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:29:25.831 00:29:25.831 --- 10.0.0.1 ping statistics --- 00:29:25.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.831 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
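The `nvmf_tcp_init` sequence above builds the test topology: the target port `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` with 10.0.0.2/24, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1/24, port 4420 is opened in iptables, and connectivity is verified with a ping in each direction. A sketch of the equivalent commands, emitted rather than executed here since they require root and the real NICs:

```shell
# Reconstruction of the netns topology from the log above (interface and
# namespace names taken from the trace; printed, not run, as the real
# commands need root privileges and the physical E810 ports).
TARGET_IF=cvl_0_0
INIT_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

setup_cmds() {
  cat <<EOF
ip netns add $NS
ip link set $TARGET_IF netns $NS
ip addr add 10.0.0.1/24 dev $INIT_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF
ip link set $INIT_IF up
ip netns exec $NS ip link set $TARGET_IF up
iptables -I INPUT 1 -i $INIT_IF -p tcp --dport 4420 -j ACCEPT
EOF
}
setup_cmds
```

The two ping statistics blocks in the log (0% packet loss both ways) confirm the topology before any NVMe/TCP traffic is attempted.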
00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2181759 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2181759 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2181759 ']' 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.831 16:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.831 [2024-12-09 16:02:20.372003] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:25.831 [2024-12-09 16:02:20.372896] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:29:25.831 [2024-12-09 16:02:20.372929] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.831 [2024-12-09 16:02:20.452037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.831 [2024-12-09 16:02:20.494877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.831 [2024-12-09 16:02:20.494913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.831 [2024-12-09 16:02:20.494921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.831 [2024-12-09 16:02:20.494927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.831 [2024-12-09 16:02:20.494932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:25.831 [2024-12-09 16:02:20.496566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.831 [2024-12-09 16:02:20.496600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.831 [2024-12-09 16:02:20.496704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.831 [2024-12-09 16:02:20.496705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.831 [2024-12-09 16:02:20.566142] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:25.831 [2024-12-09 16:02:20.567092] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:25.831 [2024-12-09 16:02:20.567129] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:25.831 [2024-12-09 16:02:20.567278] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:25.831 [2024-12-09 16:02:20.567360] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
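The target is launched inside the namespace with `-m 0x1E --interrupt-mode`, and the four "Reactor started" notices above correspond to the four set bits of that mask (cores 1 through 4). A tiny helper decoding an SPDK core mask into a core list:

```shell
# Decode a hex core mask (as passed to nvmf_tgt -m) into the cores it
# enables; 0x1E has bits 1-4 set, matching the reactors in the log.
decode_mask() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then out="$out $core"; fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out# }"
}
decode_mask 0x1E   # -> 1 2 3 4
```

The subsequent `spdk_thread_set_interrupt_mode` notices show each poll-group thread on those cores being switched to interrupt mode, which is the point of this `--interrupt-mode` test variant.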
00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.090 [2024-12-09 16:02:21.249623] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.090 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.091 16:02:21 
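After the "TCP Transport Init" notice, the harness pipes a generated RPC batch (the `rpcs.txt` cat, whose contents are not shown in the trace) into the target. A sketch of the likely `scripts/rpc.py` equivalents, reconstructed from values visible elsewhere in the log (`-t tcp -o -u 8192`, `MALLOC_BDEV_SIZE=64`, `MALLOC_BLOCK_SIZE=512`, `NVME_SUBNQN`, the serial, and the 10.0.0.2:4420 listener); the subsystem-creation steps in particular are an inference, since only their end result (Malloc0 and the listener notice) appears in the trace:

```shell
# Hypothetical rpc.py rendition of the hidden rpcs.txt batch; printed
# rather than executed, since it needs a running nvmf_tgt.
rpc_calls() {
  cat <<'EOF'
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
EOF
}
rpc_calls
```

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice below is the expected outcome of the final call.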
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.091 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:26.091 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:26.091 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.091 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.350 Malloc0 00:29:26.350 [2024-12-09 16:02:21.349853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2181869 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2181869 /var/tmp/bdevperf.sock 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2181869 ']' 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:26.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.350 { 00:29:26.350 "params": { 00:29:26.350 "name": "Nvme$subsystem", 00:29:26.350 "trtype": "$TEST_TRANSPORT", 00:29:26.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.350 "adrfam": "ipv4", 00:29:26.350 "trsvcid": "$NVMF_PORT", 00:29:26.350 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.350 "hdgst": ${hdgst:-false}, 00:29:26.350 "ddgst": ${ddgst:-false} 00:29:26.350 }, 00:29:26.350 "method": "bdev_nvme_attach_controller" 00:29:26.350 } 00:29:26.350 EOF 00:29:26.350 )") 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:26.350 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.350 "params": { 00:29:26.350 "name": "Nvme0", 00:29:26.350 "trtype": "tcp", 00:29:26.350 "traddr": "10.0.0.2", 00:29:26.350 "adrfam": "ipv4", 00:29:26.350 "trsvcid": "4420", 00:29:26.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:26.350 "hdgst": false, 00:29:26.350 "ddgst": false 00:29:26.350 }, 00:29:26.350 "method": "bdev_nvme_attach_controller" 00:29:26.350 }' 00:29:26.350 [2024-12-09 16:02:21.448288] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:29:26.350 [2024-12-09 16:02:21.448338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181869 ] 00:29:26.350 [2024-12-09 16:02:21.524686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.350 [2024-12-09 16:02:21.564058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.919 Running I/O for 10 seconds... 
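The trace above shows `gen_nvmf_target_json` expanding a heredoc template once per subsystem, comma-joining the fragments, and feeding the result to bdevperf through `--json /dev/fd/63`. A minimal stand-alone sketch of that pattern follows; `gen_target_json` is an illustrative stand-in (the real helper lives in `nvmf/common.sh` and substitutes `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` where this sketch uses the literal values seen in the log):

```shell
#!/bin/sh
# Sketch of the gen_nvmf_target_json pattern traced above: one
# bdev_nvme_attach_controller fragment per subsystem id, comma-joined
# (the role IFS=, plays in the trace) for bdevperf's --json input.
gen_target_json() {
    config=
    for subsystem in "${@:-0}"; do
        frag=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
        # Append with a comma separator between subsystem fragments.
        config="${config:+$config,}$frag"
    done
    printf '%s\n' "$config"
}

gen_target_json 0
```

In the test itself the output is consumed via process substitution, which is what `--json /dev/fd/63` in the bdevperf command line above corresponds to.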
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']'
00:29:26.919 16:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
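The `waitforio` helper traced here polls `bdev_get_iostat` over the bdevperf RPC socket until the bdev reports at least 100 read ops or ten quarter-second attempts expire. The countdown structure below mirrors the traced loop; `get_read_io_count` is an illustrative stub standing in for the real pipeline `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
#!/bin/sh
# Stand-alone sketch of host_management.sh's waitforio loop: poll the
# read-op counter up to 10 times, 0.25 s apart, succeed once it hits 100.
state=$(mktemp)

get_read_io_count() {
    # Stub: first sample mimics the 78 ops seen in the trace, later
    # samples the 662 seen on the next iteration. A temp file carries
    # state across the command-substitution subshells.
    if [ -s "$state" ]; then
        echo 662
    else
        echo polled > "$state"
        echo 78
    fi
}

waitforio() {
    ret=1
    i=10
    while [ "$i" -ne 0 ]; do
        read_io_count=$(get_read_io_count)
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        i=$((i - 1))
    done
    return "$ret"
}

waitforio && echo "I/O started (read_io_count=$read_io_count)"
```

Once the real loop breaks with `ret=0`, the test proceeds to tear the host down with `nvmf_subsystem_remove_host`, which is what triggers the qpair abort messages that follow in the log.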
00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=662 00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 662 -ge 100 ']' 00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:27.180 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:27.181 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:27.181 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:27.181 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.181 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.181 [2024-12-09 16:02:22.249280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2639a60 is same with the state(6) to be set 00:29:27.181 [2024-12-09 16:02:22.249324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2639a60 is same with the state(6) to be set 00:29:27.181 [2024-12-09 16:02:22.249332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2639a60 is same with the state(6) to be set 00:29:27.181 [2024-12-09 16:02:22.249339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2639a60 is same with the state(6) to be set
00:29:27.181 [previous message repeated for timestamps 16:02:22.249345 through 16:02:22.249633]
00:29:27.181 [2024-12-09 16:02:22.249639] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2639a60 is same with the state(6) to be set
00:29:27.181 [previous message repeated for timestamps 16:02:22.249645 through 16:02:22.249700]
00:29:27.181 [2024-12-09 16:02:22.249773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.181 [2024-12-09 16:02:22.249802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.182 [READ/completion pairs repeated for cid:1 through cid:39, lba:90240 through lba:95104 in steps of 128 (len:128 each), every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 16:02:22.249820 through 16:02:22.250414]
00:29:27.182 [2024-12-09 16:02:22.250422] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.182 [2024-12-09 16:02:22.250525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.182 [2024-12-09 16:02:22.250531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 
[2024-12-09 16:02:22.250592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.183 [2024-12-09 16:02:22.250773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.250781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8254b0 is same with the state(6) to be set 00:29:27.183 [2024-12-09 16:02:22.251729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:27.183 task offset: 90112 on job bdev=Nvme0n1 fails 00:29:27.183 00:29:27.183 Latency(us) 00:29:27.183 [2024-12-09T15:02:22.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.183 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.183 Job: Nvme0n1 ended in about 0.37 seconds with error 00:29:27.183 Verification LBA range: start 0x0 length 0x400 00:29:27.183 Nvme0n1 : 0.37 1896.35 118.52 172.40 0.00 30065.47 4400.27 26339.23 00:29:27.183 [2024-12-09T15:02:22.411Z] =================================================================================================================== 00:29:27.183 [2024-12-09T15:02:22.411Z] Total : 1896.35 118.52 172.40 0.00 30065.47 4400.27 26339.23 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:27.183 [2024-12-09 16:02:22.254166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:27.183 [2024-12-09 16:02:22.254188] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811aa0 (9): Bad file descriptor 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.183 [2024-12-09 16:02:22.255188] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:27.183 [2024-12-09 16:02:22.255320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:27.183 [2024-12-09 16:02:22.255343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.183 [2024-12-09 16:02:22.255359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:27.183 [2024-12-09 16:02:22.255366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:27.183 [2024-12-09 16:02:22.255373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.183 [2024-12-09 16:02:22.255380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x811aa0 00:29:27.183 [2024-12-09 16:02:22.255400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x811aa0 (9): Bad file descriptor 00:29:27.183 [2024-12-09 16:02:22.255414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:27.183 [2024-12-09 16:02:22.255421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] 
controller reinitialization failed 00:29:27.183 [2024-12-09 16:02:22.255429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:27.183 [2024-12-09 16:02:22.255436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.183 16:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2181869 00:29:28.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2181869) - No such process 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:28.120 { 00:29:28.120 "params": { 00:29:28.120 "name": "Nvme$subsystem", 00:29:28.120 "trtype": "$TEST_TRANSPORT", 00:29:28.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.120 "adrfam": "ipv4", 00:29:28.120 "trsvcid": "$NVMF_PORT", 00:29:28.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.120 "hdgst": ${hdgst:-false}, 00:29:28.120 "ddgst": ${ddgst:-false} 00:29:28.120 }, 00:29:28.120 "method": "bdev_nvme_attach_controller" 00:29:28.120 } 00:29:28.120 EOF 00:29:28.120 )") 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:28.120 16:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:28.120 "params": { 00:29:28.120 "name": "Nvme0", 00:29:28.120 "trtype": "tcp", 00:29:28.120 "traddr": "10.0.0.2", 00:29:28.120 "adrfam": "ipv4", 00:29:28.120 "trsvcid": "4420", 00:29:28.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:28.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:28.120 "hdgst": false, 00:29:28.120 "ddgst": false 00:29:28.120 }, 00:29:28.120 "method": "bdev_nvme_attach_controller" 00:29:28.120 }' 00:29:28.120 [2024-12-09 16:02:23.315434] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:29:28.120 [2024-12-09 16:02:23.315481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182273 ] 00:29:28.379 [2024-12-09 16:02:23.389212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.379 [2024-12-09 16:02:23.427273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.379 Running I/O for 1 seconds... 00:29:29.756 2048.00 IOPS, 128.00 MiB/s 00:29:29.756 Latency(us) 00:29:29.756 [2024-12-09T15:02:24.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.756 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.756 Verification LBA range: start 0x0 length 0x400 00:29:29.756 Nvme0n1 : 1.02 2079.09 129.94 0.00 0.00 30300.77 5742.20 26963.38 00:29:29.756 [2024-12-09T15:02:24.984Z] =================================================================================================================== 00:29:29.756 [2024-12-09T15:02:24.984Z] Total : 2079.09 129.94 0.00 0.00 30300.77 5742.20 26963.38 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.756 rmmod nvme_tcp 00:29:29.756 rmmod nvme_fabrics 00:29:29.756 rmmod nvme_keyring 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2181759 ']' 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2181759 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2181759 ']' 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2181759 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:29.756 16:02:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2181759 00:29:29.756 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.757 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.757 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2181759' 00:29:29.757 killing process with pid 2181759 00:29:29.757 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2181759 00:29:29.757 16:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2181759 00:29:30.017 [2024-12-09 16:02:25.030465] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.017 16:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.017 16:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.922 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.922 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:31.922 00:29:31.922 real 0m12.916s 00:29:31.922 user 0m17.780s 00:29:31.922 sys 0m6.412s 00:29:31.922 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.922 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:31.922 ************************************ 00:29:31.922 END TEST nvmf_host_management 00:29:31.922 ************************************ 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.181 
16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.181 ************************************ 00:29:32.181 START TEST nvmf_lvol 00:29:32.181 ************************************ 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:32.181 * Looking for test storage... 00:29:32.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.181 16:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.181 --rc genhtml_branch_coverage=1 00:29:32.181 --rc 
genhtml_function_coverage=1 00:29:32.181 --rc genhtml_legend=1 00:29:32.181 --rc geninfo_all_blocks=1 00:29:32.181 --rc geninfo_unexecuted_blocks=1 00:29:32.181 00:29:32.181 ' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.181 --rc genhtml_branch_coverage=1 00:29:32.181 --rc genhtml_function_coverage=1 00:29:32.181 --rc genhtml_legend=1 00:29:32.181 --rc geninfo_all_blocks=1 00:29:32.181 --rc geninfo_unexecuted_blocks=1 00:29:32.181 00:29:32.181 ' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.181 --rc genhtml_branch_coverage=1 00:29:32.181 --rc genhtml_function_coverage=1 00:29:32.181 --rc genhtml_legend=1 00:29:32.181 --rc geninfo_all_blocks=1 00:29:32.181 --rc geninfo_unexecuted_blocks=1 00:29:32.181 00:29:32.181 ' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:32.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.181 --rc genhtml_branch_coverage=1 00:29:32.181 --rc genhtml_function_coverage=1 00:29:32.181 --rc genhtml_legend=1 00:29:32.181 --rc geninfo_all_blocks=1 00:29:32.181 --rc geninfo_unexecuted_blocks=1 00:29:32.181 00:29:32.181 ' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.181 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.182 16:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.182 16:02:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.182 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.441 16:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:39.010 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:39.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.010 16:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:39.010 Found net devices under 0000:af:00.0: cvl_0_0 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:39.010 Found net devices under 0000:af:00.1: cvl_0_1 00:29:39.010 16:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:39.010 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.011 16:02:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:39.011 16:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:39.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:29:39.011 00:29:39.011 --- 10.0.0.2 ping statistics --- 00:29:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.011 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:39.011 00:29:39.011 --- 10.0.0.1 ping statistics --- 00:29:39.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.011 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:39.011 
16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2185997 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2185997 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2185997 ']' 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:39.011 [2024-12-09 16:02:33.367656] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:39.011 [2024-12-09 16:02:33.368553] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:29:39.011 [2024-12-09 16:02:33.368586] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.011 [2024-12-09 16:02:33.427449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:39.011 [2024-12-09 16:02:33.468051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.011 [2024-12-09 16:02:33.468086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.011 [2024-12-09 16:02:33.468093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.011 [2024-12-09 16:02:33.468098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.011 [2024-12-09 16:02:33.468104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.011 [2024-12-09 16:02:33.469415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.011 [2024-12-09 16:02:33.469525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.011 [2024-12-09 16:02:33.469526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.011 [2024-12-09 16:02:33.536046] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:39.011 [2024-12-09 16:02:33.536803] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:39.011 [2024-12-09 16:02:33.536916] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:39.011 [2024-12-09 16:02:33.537056] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:39.011 [2024-12-09 16:02:33.766280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.011 16:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:39.011 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:39.011 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:39.011 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:39.011 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:39.270 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:39.529 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=936999c4-7024-473a-9899-f492feacdf5b 00:29:39.529 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 936999c4-7024-473a-9899-f492feacdf5b lvol 20 00:29:39.788 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2efc8a0e-384e-4f83-970a-0ddf72701500 00:29:39.788 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:39.788 16:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2efc8a0e-384e-4f83-970a-0ddf72701500 00:29:40.046 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.304 [2024-12-09 16:02:35.350137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.304 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:40.563 
16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2186265 00:29:40.563 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:40.563 16:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:41.506 16:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2efc8a0e-384e-4f83-970a-0ddf72701500 MY_SNAPSHOT 00:29:41.764 16:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d6613014-f922-44f4-90f4-4f39b4539540 00:29:41.765 16:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2efc8a0e-384e-4f83-970a-0ddf72701500 30 00:29:42.023 16:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d6613014-f922-44f4-90f4-4f39b4539540 MY_CLONE 00:29:42.282 16:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1cbe2b29-4cda-4405-a1ec-a679d63d34fb 00:29:42.282 16:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1cbe2b29-4cda-4405-a1ec-a679d63d34fb 00:29:42.850 16:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2186265 00:29:50.964 Initializing NVMe Controllers 00:29:50.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:50.964 
Controller IO queue size 128, less than required. 00:29:50.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:50.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:50.964 Initialization complete. Launching workers. 00:29:50.964 ======================================================== 00:29:50.964 Latency(us) 00:29:50.964 Device Information : IOPS MiB/s Average min max 00:29:50.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12438.40 48.59 10296.13 226.18 86374.75 00:29:50.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12238.00 47.80 10461.96 3457.32 46512.14 00:29:50.964 ======================================================== 00:29:50.964 Total : 24676.40 96.39 10378.37 226.18 86374.75 00:29:50.964 00:29:50.964 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.223 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2efc8a0e-384e-4f83-970a-0ddf72701500 00:29:51.223 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 936999c4-7024-473a-9899-f492feacdf5b 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.482 rmmod nvme_tcp 00:29:51.482 rmmod nvme_fabrics 00:29:51.482 rmmod nvme_keyring 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2185997 ']' 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2185997 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2185997 ']' 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2185997 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.482 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2185997 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2185997' 00:29:51.741 killing process with pid 2185997 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2185997 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2185997 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.741 16:02:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.741 16:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.277 16:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.277 00:29:54.277 real 0m21.801s 00:29:54.277 user 0m55.633s 00:29:54.277 sys 0m9.761s 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:54.277 ************************************ 00:29:54.277 END TEST nvmf_lvol 00:29:54.277 ************************************ 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:54.277 ************************************ 00:29:54.277 START TEST nvmf_lvs_grow 00:29:54.277 ************************************ 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:54.277 * Looking for test storage... 
00:29:54.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.277 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.278 16:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.278 16:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.278 --rc genhtml_branch_coverage=1 00:29:54.278 --rc genhtml_function_coverage=1 00:29:54.278 --rc genhtml_legend=1 00:29:54.278 --rc geninfo_all_blocks=1 00:29:54.278 --rc geninfo_unexecuted_blocks=1 00:29:54.278 00:29:54.278 ' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.278 --rc genhtml_branch_coverage=1 00:29:54.278 --rc genhtml_function_coverage=1 00:29:54.278 --rc genhtml_legend=1 00:29:54.278 --rc geninfo_all_blocks=1 00:29:54.278 --rc geninfo_unexecuted_blocks=1 00:29:54.278 00:29:54.278 ' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.278 --rc genhtml_branch_coverage=1 00:29:54.278 --rc genhtml_function_coverage=1 00:29:54.278 --rc genhtml_legend=1 00:29:54.278 --rc geninfo_all_blocks=1 00:29:54.278 --rc geninfo_unexecuted_blocks=1 00:29:54.278 00:29:54.278 ' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:54.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.278 --rc genhtml_branch_coverage=1 00:29:54.278 --rc genhtml_function_coverage=1 00:29:54.278 --rc genhtml_legend=1 00:29:54.278 --rc geninfo_all_blocks=1 00:29:54.278 --rc 
geninfo_unexecuted_blocks=1 00:29:54.278 00:29:54.278 ' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:29:54.278 16:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.278 16:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.278 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.279 16:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.279 16:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.848 
16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.848 16:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.848 16:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:00.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:00.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.848 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:00.849 Found net devices under 0000:af:00.0: cvl_0_0 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.849 16:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:00.849 Found net devices under 0000:af:00.1: cvl_0_1 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.849 
16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.849 16:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:30:00.849 00:30:00.849 --- 10.0.0.2 ping statistics --- 00:30:00.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.849 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:30:00.849 00:30:00.849 --- 10.0.0.1 ping statistics --- 00:30:00.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.849 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.849 16:02:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2191565 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2191565 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2191565 ']' 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.849 [2024-12-09 16:02:55.347308] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:00.849 [2024-12-09 16:02:55.348198] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:30:00.849 [2024-12-09 16:02:55.348236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.849 [2024-12-09 16:02:55.424314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.849 [2024-12-09 16:02:55.463500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.849 [2024-12-09 16:02:55.463535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.849 [2024-12-09 16:02:55.463542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.849 [2024-12-09 16:02:55.463548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.849 [2024-12-09 16:02:55.463553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.849 [2024-12-09 16:02:55.464071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.849 [2024-12-09 16:02:55.531445] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.849 [2024-12-09 16:02:55.531670] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.849 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.850 [2024-12-09 16:02:55.764710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.850 ************************************ 00:30:00.850 START TEST lvs_grow_clean 00:30:00.850 ************************************ 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:00.850 16:02:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:00.850 16:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:00.850 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:00.850 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:01.109 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:01.109 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:01.109 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:01.368 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:01.368 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:01.368 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u afd8eba3-5bae-482b-873f-ece6d501e7c4 lvol 150 00:30:01.626 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=85fa7f46-46d9-4091-9a83-0b9b4873bee7 00:30:01.627 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:01.627 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:01.627 [2024-12-09 16:02:56.800492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:01.627 [2024-12-09 16:02:56.800621] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:01.627 true 00:30:01.627 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:01.627 16:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:01.886 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:01.886 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:02.144 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85fa7f46-46d9-4091-9a83-0b9b4873bee7 00:30:02.403 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:02.403 [2024-12-09 16:02:57.576950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.403 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2192056 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2192056 /var/tmp/bdevperf.sock 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2192056 ']' 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:02.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.662 16:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:02.662 [2024-12-09 16:02:57.821474] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:02.662 [2024-12-09 16:02:57.821518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192056 ] 00:30:02.920 [2024-12-09 16:02:57.896310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.920 [2024-12-09 16:02:57.937895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.920 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.920 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:02.920 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:03.179 Nvme0n1 00:30:03.179 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:03.438 [ 00:30:03.438 { 00:30:03.438 "name": "Nvme0n1", 00:30:03.438 "aliases": [ 00:30:03.438 "85fa7f46-46d9-4091-9a83-0b9b4873bee7" 00:30:03.438 ], 00:30:03.438 "product_name": "NVMe disk", 00:30:03.438 
"block_size": 4096, 00:30:03.438 "num_blocks": 38912, 00:30:03.438 "uuid": "85fa7f46-46d9-4091-9a83-0b9b4873bee7", 00:30:03.438 "numa_id": 1, 00:30:03.438 "assigned_rate_limits": { 00:30:03.438 "rw_ios_per_sec": 0, 00:30:03.438 "rw_mbytes_per_sec": 0, 00:30:03.438 "r_mbytes_per_sec": 0, 00:30:03.438 "w_mbytes_per_sec": 0 00:30:03.438 }, 00:30:03.438 "claimed": false, 00:30:03.438 "zoned": false, 00:30:03.438 "supported_io_types": { 00:30:03.438 "read": true, 00:30:03.438 "write": true, 00:30:03.438 "unmap": true, 00:30:03.438 "flush": true, 00:30:03.438 "reset": true, 00:30:03.438 "nvme_admin": true, 00:30:03.438 "nvme_io": true, 00:30:03.438 "nvme_io_md": false, 00:30:03.438 "write_zeroes": true, 00:30:03.438 "zcopy": false, 00:30:03.438 "get_zone_info": false, 00:30:03.438 "zone_management": false, 00:30:03.438 "zone_append": false, 00:30:03.438 "compare": true, 00:30:03.438 "compare_and_write": true, 00:30:03.438 "abort": true, 00:30:03.438 "seek_hole": false, 00:30:03.438 "seek_data": false, 00:30:03.438 "copy": true, 00:30:03.438 "nvme_iov_md": false 00:30:03.438 }, 00:30:03.438 "memory_domains": [ 00:30:03.438 { 00:30:03.438 "dma_device_id": "system", 00:30:03.438 "dma_device_type": 1 00:30:03.438 } 00:30:03.438 ], 00:30:03.438 "driver_specific": { 00:30:03.438 "nvme": [ 00:30:03.438 { 00:30:03.438 "trid": { 00:30:03.438 "trtype": "TCP", 00:30:03.438 "adrfam": "IPv4", 00:30:03.438 "traddr": "10.0.0.2", 00:30:03.438 "trsvcid": "4420", 00:30:03.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:03.438 }, 00:30:03.438 "ctrlr_data": { 00:30:03.438 "cntlid": 1, 00:30:03.438 "vendor_id": "0x8086", 00:30:03.438 "model_number": "SPDK bdev Controller", 00:30:03.438 "serial_number": "SPDK0", 00:30:03.438 "firmware_revision": "25.01", 00:30:03.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:03.438 "oacs": { 00:30:03.438 "security": 0, 00:30:03.438 "format": 0, 00:30:03.438 "firmware": 0, 00:30:03.438 "ns_manage": 0 00:30:03.438 }, 00:30:03.438 "multi_ctrlr": true, 
00:30:03.438 "ana_reporting": false 00:30:03.438 }, 00:30:03.438 "vs": { 00:30:03.438 "nvme_version": "1.3" 00:30:03.438 }, 00:30:03.438 "ns_data": { 00:30:03.438 "id": 1, 00:30:03.438 "can_share": true 00:30:03.438 } 00:30:03.438 } 00:30:03.438 ], 00:30:03.438 "mp_policy": "active_passive" 00:30:03.438 } 00:30:03.438 } 00:30:03.438 ] 00:30:03.438 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2192070 00:30:03.438 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.438 16:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:03.438 Running I/O for 10 seconds... 00:30:04.910 Latency(us) 00:30:04.910 [2024-12-09T15:03:00.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.910 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:04.910 [2024-12-09T15:03:00.138Z] =================================================================================================================== 00:30:04.910 [2024-12-09T15:03:00.138Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:04.910 00:30:05.511 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:05.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.511 Nvme0n1 : 2.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:05.511 [2024-12-09T15:03:00.739Z] 
=================================================================================================================== 00:30:05.511 [2024-12-09T15:03:00.739Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:05.511 00:30:05.770 true 00:30:05.770 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:05.770 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:05.770 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:05.770 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:05.770 16:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2192070 00:30:06.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.706 Nvme0n1 : 3.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:30:06.706 [2024-12-09T15:03:01.934Z] =================================================================================================================== 00:30:06.706 [2024-12-09T15:03:01.934Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:30:06.706 00:30:07.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:07.642 Nvme0n1 : 4.00 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:30:07.642 [2024-12-09T15:03:02.870Z] =================================================================================================================== 00:30:07.642 [2024-12-09T15:03:02.870Z] Total : 23431.50 91.53 0.00 0.00 0.00 0.00 0.00 00:30:07.642 00:30:08.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:08.579 Nvme0n1 : 5.00 23520.40 91.88 0.00 0.00 0.00 0.00 0.00 00:30:08.579 [2024-12-09T15:03:03.807Z] =================================================================================================================== 00:30:08.579 [2024-12-09T15:03:03.807Z] Total : 23520.40 91.88 0.00 0.00 0.00 0.00 0.00 00:30:08.579 00:30:09.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:09.515 Nvme0n1 : 6.00 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:30:09.515 [2024-12-09T15:03:04.743Z] =================================================================================================================== 00:30:09.515 [2024-12-09T15:03:04.743Z] Total : 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:30:09.515 00:30:10.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.890 Nvme0n1 : 7.00 23640.14 92.34 0.00 0.00 0.00 0.00 0.00 00:30:10.890 [2024-12-09T15:03:06.118Z] =================================================================================================================== 00:30:10.890 [2024-12-09T15:03:06.118Z] Total : 23640.14 92.34 0.00 0.00 0.00 0.00 0.00 00:30:10.890 00:30:11.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.827 Nvme0n1 : 8.00 23582.38 92.12 0.00 0.00 0.00 0.00 0.00 00:30:11.827 [2024-12-09T15:03:07.055Z] =================================================================================================================== 00:30:11.827 [2024-12-09T15:03:07.055Z] Total : 23582.38 92.12 0.00 0.00 0.00 0.00 0.00 00:30:11.827 00:30:12.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.763 Nvme0n1 : 9.00 23597.56 92.18 0.00 0.00 0.00 0.00 0.00 00:30:12.763 [2024-12-09T15:03:07.991Z] =================================================================================================================== 00:30:12.763 [2024-12-09T15:03:07.991Z] Total : 23597.56 92.18 0.00 0.00 0.00 0.00 0.00 00:30:12.763 
00:30:13.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.701 Nvme0n1 : 10.00 23612.70 92.24 0.00 0.00 0.00 0.00 0.00 00:30:13.701 [2024-12-09T15:03:08.929Z] =================================================================================================================== 00:30:13.701 [2024-12-09T15:03:08.929Z] Total : 23612.70 92.24 0.00 0.00 0.00 0.00 0.00 00:30:13.701 00:30:13.701 00:30:13.701 Latency(us) 00:30:13.701 [2024-12-09T15:03:08.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.701 Nvme0n1 : 10.00 23616.91 92.25 0.00 0.00 5416.80 3354.82 26339.23 00:30:13.701 [2024-12-09T15:03:08.929Z] =================================================================================================================== 00:30:13.701 [2024-12-09T15:03:08.929Z] Total : 23616.91 92.25 0.00 0.00 5416.80 3354.82 26339.23 00:30:13.701 { 00:30:13.701 "results": [ 00:30:13.701 { 00:30:13.701 "job": "Nvme0n1", 00:30:13.701 "core_mask": "0x2", 00:30:13.701 "workload": "randwrite", 00:30:13.701 "status": "finished", 00:30:13.701 "queue_depth": 128, 00:30:13.701 "io_size": 4096, 00:30:13.701 "runtime": 10.003639, 00:30:13.701 "iops": 23616.905807976476, 00:30:13.701 "mibps": 92.25353831240811, 00:30:13.701 "io_failed": 0, 00:30:13.701 "io_timeout": 0, 00:30:13.701 "avg_latency_us": 5416.800541110241, 00:30:13.701 "min_latency_us": 3354.8190476190475, 00:30:13.701 "max_latency_us": 26339.230476190478 00:30:13.701 } 00:30:13.701 ], 00:30:13.701 "core_count": 1 00:30:13.701 } 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2192056 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2192056 ']' 00:30:13.701 16:03:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2192056 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2192056 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2192056' 00:30:13.701 killing process with pid 2192056 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2192056 00:30:13.701 Received shutdown signal, test time was about 10.000000 seconds 00:30:13.701 00:30:13.701 Latency(us) 00:30:13.701 [2024-12-09T15:03:08.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.701 [2024-12-09T15:03:08.929Z] =================================================================================================================== 00:30:13.701 [2024-12-09T15:03:08.929Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:13.701 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2192056 00:30:13.960 16:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.960 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.219 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:14.219 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:14.478 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:14.478 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:14.478 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:14.478 [2024-12-09 16:03:09.672525] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:14.737 request: 00:30:14.737 { 00:30:14.737 "uuid": "afd8eba3-5bae-482b-873f-ece6d501e7c4", 00:30:14.737 "method": 
"bdev_lvol_get_lvstores", 00:30:14.737 "req_id": 1 00:30:14.737 } 00:30:14.737 Got JSON-RPC error response 00:30:14.737 response: 00:30:14.737 { 00:30:14.737 "code": -19, 00:30:14.737 "message": "No such device" 00:30:14.737 } 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:14.737 16:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:14.996 aio_bdev 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 85fa7f46-46d9-4091-9a83-0b9b4873bee7 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=85fa7f46-46d9-4091-9a83-0b9b4873bee7 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:14.996 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:15.255 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85fa7f46-46d9-4091-9a83-0b9b4873bee7 -t 2000 00:30:15.514 [ 00:30:15.514 { 00:30:15.514 "name": "85fa7f46-46d9-4091-9a83-0b9b4873bee7", 00:30:15.514 "aliases": [ 00:30:15.514 "lvs/lvol" 00:30:15.514 ], 00:30:15.514 "product_name": "Logical Volume", 00:30:15.514 "block_size": 4096, 00:30:15.514 "num_blocks": 38912, 00:30:15.514 "uuid": "85fa7f46-46d9-4091-9a83-0b9b4873bee7", 00:30:15.514 "assigned_rate_limits": { 00:30:15.514 "rw_ios_per_sec": 0, 00:30:15.514 "rw_mbytes_per_sec": 0, 00:30:15.514 "r_mbytes_per_sec": 0, 00:30:15.514 "w_mbytes_per_sec": 0 00:30:15.514 }, 00:30:15.514 "claimed": false, 00:30:15.514 "zoned": false, 00:30:15.514 "supported_io_types": { 00:30:15.514 "read": true, 00:30:15.514 "write": true, 00:30:15.514 "unmap": true, 00:30:15.514 "flush": false, 00:30:15.514 "reset": true, 00:30:15.514 "nvme_admin": false, 00:30:15.514 "nvme_io": false, 00:30:15.514 "nvme_io_md": false, 00:30:15.514 "write_zeroes": true, 00:30:15.514 "zcopy": false, 00:30:15.514 "get_zone_info": false, 00:30:15.514 "zone_management": false, 00:30:15.514 "zone_append": false, 00:30:15.514 "compare": false, 00:30:15.514 "compare_and_write": false, 00:30:15.514 "abort": false, 00:30:15.514 "seek_hole": true, 00:30:15.514 "seek_data": true, 00:30:15.514 "copy": false, 00:30:15.515 "nvme_iov_md": false 00:30:15.515 }, 00:30:15.515 "driver_specific": { 00:30:15.515 "lvol": { 00:30:15.515 "lvol_store_uuid": "afd8eba3-5bae-482b-873f-ece6d501e7c4", 00:30:15.515 "base_bdev": "aio_bdev", 00:30:15.515 
"thin_provision": false, 00:30:15.515 "num_allocated_clusters": 38, 00:30:15.515 "snapshot": false, 00:30:15.515 "clone": false, 00:30:15.515 "esnap_clone": false 00:30:15.515 } 00:30:15.515 } 00:30:15.515 } 00:30:15.515 ] 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afd8eba3-5bae-482b-873f-ece6d501e7c4 00:30:15.515 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:15.774 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:15.774 16:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85fa7f46-46d9-4091-9a83-0b9b4873bee7 00:30:16.032 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afd8eba3-5bae-482b-873f-ece6d501e7c4 
00:30:16.291 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:16.291 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:16.291 00:30:16.291 real 0m15.677s 00:30:16.291 user 0m15.118s 00:30:16.291 sys 0m1.534s 00:30:16.291 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.291 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:16.291 ************************************ 00:30:16.291 END TEST lvs_grow_clean 00:30:16.291 ************************************ 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:16.550 ************************************ 00:30:16.550 START TEST lvs_grow_dirty 00:30:16.550 ************************************ 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:16.550 16:03:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:16.550 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:16.551 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:16.809 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:16.809 16:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:17.068 16:03:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:17.068 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:17.068 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:17.068 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:17.068 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:17.068 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 lvol 150 00:30:17.326 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:17.326 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:17.326 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:17.585 [2024-12-09 16:03:12.600492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:17.585 [2024-12-09 
16:03:12.600622] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:17.585 true 00:30:17.585 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:17.585 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:17.844 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:17.844 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:17.844 16:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:18.103 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:18.362 [2024-12-09 16:03:13.344856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2195120 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2195120 /var/tmp/bdevperf.sock 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2195120 ']' 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.362 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:18.362 [2024-12-09 16:03:13.581405] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:30:18.362 [2024-12-09 16:03:13.581454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195120 ] 00:30:18.621 [2024-12-09 16:03:13.656911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.621 [2024-12-09 16:03:13.695919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.621 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.621 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:18.621 16:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:18.879 Nvme0n1 00:30:18.880 16:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:19.138 [ 00:30:19.138 { 00:30:19.138 "name": "Nvme0n1", 00:30:19.138 "aliases": [ 00:30:19.138 "c937c2d2-b122-4acf-96f9-f3b755cef425" 00:30:19.138 ], 00:30:19.138 "product_name": "NVMe disk", 00:30:19.138 "block_size": 4096, 00:30:19.138 "num_blocks": 38912, 00:30:19.138 "uuid": "c937c2d2-b122-4acf-96f9-f3b755cef425", 00:30:19.138 "numa_id": 1, 00:30:19.138 "assigned_rate_limits": { 00:30:19.138 "rw_ios_per_sec": 0, 00:30:19.138 "rw_mbytes_per_sec": 0, 00:30:19.138 "r_mbytes_per_sec": 0, 00:30:19.138 "w_mbytes_per_sec": 0 00:30:19.138 }, 00:30:19.138 "claimed": false, 00:30:19.138 "zoned": false, 
00:30:19.138 "supported_io_types": { 00:30:19.138 "read": true, 00:30:19.138 "write": true, 00:30:19.138 "unmap": true, 00:30:19.138 "flush": true, 00:30:19.138 "reset": true, 00:30:19.138 "nvme_admin": true, 00:30:19.138 "nvme_io": true, 00:30:19.138 "nvme_io_md": false, 00:30:19.138 "write_zeroes": true, 00:30:19.138 "zcopy": false, 00:30:19.138 "get_zone_info": false, 00:30:19.139 "zone_management": false, 00:30:19.139 "zone_append": false, 00:30:19.139 "compare": true, 00:30:19.139 "compare_and_write": true, 00:30:19.139 "abort": true, 00:30:19.139 "seek_hole": false, 00:30:19.139 "seek_data": false, 00:30:19.139 "copy": true, 00:30:19.139 "nvme_iov_md": false 00:30:19.139 }, 00:30:19.139 "memory_domains": [ 00:30:19.139 { 00:30:19.139 "dma_device_id": "system", 00:30:19.139 "dma_device_type": 1 00:30:19.139 } 00:30:19.139 ], 00:30:19.139 "driver_specific": { 00:30:19.139 "nvme": [ 00:30:19.139 { 00:30:19.139 "trid": { 00:30:19.139 "trtype": "TCP", 00:30:19.139 "adrfam": "IPv4", 00:30:19.139 "traddr": "10.0.0.2", 00:30:19.139 "trsvcid": "4420", 00:30:19.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:19.139 }, 00:30:19.139 "ctrlr_data": { 00:30:19.139 "cntlid": 1, 00:30:19.139 "vendor_id": "0x8086", 00:30:19.139 "model_number": "SPDK bdev Controller", 00:30:19.139 "serial_number": "SPDK0", 00:30:19.139 "firmware_revision": "25.01", 00:30:19.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.139 "oacs": { 00:30:19.139 "security": 0, 00:30:19.139 "format": 0, 00:30:19.139 "firmware": 0, 00:30:19.139 "ns_manage": 0 00:30:19.139 }, 00:30:19.139 "multi_ctrlr": true, 00:30:19.139 "ana_reporting": false 00:30:19.139 }, 00:30:19.139 "vs": { 00:30:19.139 "nvme_version": "1.3" 00:30:19.139 }, 00:30:19.139 "ns_data": { 00:30:19.139 "id": 1, 00:30:19.139 "can_share": true 00:30:19.139 } 00:30:19.139 } 00:30:19.139 ], 00:30:19.139 "mp_policy": "active_passive" 00:30:19.139 } 00:30:19.139 } 00:30:19.139 ] 00:30:19.139 16:03:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2195136 00:30:19.139 16:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:19.139 16:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:19.139 Running I/O for 10 seconds... 00:30:20.514 Latency(us) 00:30:20.514 [2024-12-09T15:03:15.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.514 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:20.514 [2024-12-09T15:03:15.742Z] =================================================================================================================== 00:30:20.514 [2024-12-09T15:03:15.742Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:20.514 00:30:21.081 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:21.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.340 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:21.340 [2024-12-09T15:03:16.568Z] =================================================================================================================== 00:30:21.340 [2024-12-09T15:03:16.568Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:21.340 00:30:21.340 true 00:30:21.340 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:21.340 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:21.599 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:21.599 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:21.599 16:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2195136 00:30:22.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.166 Nvme0n1 : 3.00 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:30:22.166 [2024-12-09T15:03:17.394Z] =================================================================================================================== 00:30:22.166 [2024-12-09T15:03:17.394Z] Total : 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:30:22.166 00:30:23.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.541 Nvme0n1 : 4.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:30:23.541 [2024-12-09T15:03:18.769Z] =================================================================================================================== 00:30:23.541 [2024-12-09T15:03:18.769Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:30:23.541 00:30:24.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.476 Nvme0n1 : 5.00 23571.20 92.08 0.00 0.00 0.00 0.00 0.00 00:30:24.476 [2024-12-09T15:03:19.704Z] =================================================================================================================== 00:30:24.476 [2024-12-09T15:03:19.704Z] Total : 23571.20 92.08 0.00 0.00 0.00 0.00 0.00 00:30:24.476 00:30:25.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:25.410 Nvme0n1 : 6.00 23600.83 92.19 0.00 0.00 0.00 0.00 0.00 00:30:25.410 [2024-12-09T15:03:20.638Z] =================================================================================================================== 00:30:25.410 [2024-12-09T15:03:20.638Z] Total : 23600.83 92.19 0.00 0.00 0.00 0.00 0.00 00:30:25.410 00:30:26.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.342 Nvme0n1 : 7.00 23640.14 92.34 0.00 0.00 0.00 0.00 0.00 00:30:26.342 [2024-12-09T15:03:21.570Z] =================================================================================================================== 00:30:26.342 [2024-12-09T15:03:21.570Z] Total : 23640.14 92.34 0.00 0.00 0.00 0.00 0.00 00:30:26.342 00:30:27.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.277 Nvme0n1 : 8.00 23669.62 92.46 0.00 0.00 0.00 0.00 0.00 00:30:27.277 [2024-12-09T15:03:22.505Z] =================================================================================================================== 00:30:27.277 [2024-12-09T15:03:22.505Z] Total : 23669.62 92.46 0.00 0.00 0.00 0.00 0.00 00:30:27.277 00:30:28.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.212 Nvme0n1 : 9.00 23678.44 92.49 0.00 0.00 0.00 0.00 0.00 00:30:28.212 [2024-12-09T15:03:23.440Z] =================================================================================================================== 00:30:28.212 [2024-12-09T15:03:23.440Z] Total : 23678.44 92.49 0.00 0.00 0.00 0.00 0.00 00:30:28.212 00:30:29.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.588 Nvme0n1 : 10.00 23698.20 92.57 0.00 0.00 0.00 0.00 0.00 00:30:29.588 [2024-12-09T15:03:24.816Z] =================================================================================================================== 00:30:29.588 [2024-12-09T15:03:24.816Z] Total : 23698.20 92.57 0.00 0.00 0.00 0.00 0.00 00:30:29.588 00:30:29.588 
00:30:29.588 Latency(us) 00:30:29.588 [2024-12-09T15:03:24.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.588 Nvme0n1 : 10.00 23704.38 92.60 0.00 0.00 5396.96 4805.97 26588.89 00:30:29.588 [2024-12-09T15:03:24.816Z] =================================================================================================================== 00:30:29.588 [2024-12-09T15:03:24.816Z] Total : 23704.38 92.60 0.00 0.00 5396.96 4805.97 26588.89 00:30:29.588 { 00:30:29.588 "results": [ 00:30:29.588 { 00:30:29.588 "job": "Nvme0n1", 00:30:29.588 "core_mask": "0x2", 00:30:29.588 "workload": "randwrite", 00:30:29.588 "status": "finished", 00:30:29.588 "queue_depth": 128, 00:30:29.588 "io_size": 4096, 00:30:29.588 "runtime": 10.002792, 00:30:29.588 "iops": 23704.381736619136, 00:30:29.588 "mibps": 92.5952411586685, 00:30:29.588 "io_failed": 0, 00:30:29.588 "io_timeout": 0, 00:30:29.588 "avg_latency_us": 5396.963588288337, 00:30:29.588 "min_latency_us": 4805.973333333333, 00:30:29.588 "max_latency_us": 26588.891428571427 00:30:29.588 } 00:30:29.588 ], 00:30:29.588 "core_count": 1 00:30:29.588 } 00:30:29.588 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2195120 00:30:29.588 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2195120 ']' 00:30:29.588 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2195120 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.589 16:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195120 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195120' 00:30:29.589 killing process with pid 2195120 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2195120 00:30:29.589 Received shutdown signal, test time was about 10.000000 seconds 00:30:29.589 00:30:29.589 Latency(us) 00:30:29.589 [2024-12-09T15:03:24.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.589 [2024-12-09T15:03:24.817Z] =================================================================================================================== 00:30:29.589 [2024-12-09T15:03:24.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2195120 00:30:29.589 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:29.848 16:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.848 16:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:29.848 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2191565 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2191565 00:30:30.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2191565 Killed "${NVMF_APP[@]}" "$@" 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2196940 00:30:30.114 16:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2196940 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2196940 ']' 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.114 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:30.114 [2024-12-09 16:03:25.312478] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:30.114 [2024-12-09 16:03:25.313380] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:30:30.114 [2024-12-09 16:03:25.313416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.373 [2024-12-09 16:03:25.390804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.373 [2024-12-09 16:03:25.429483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.373 [2024-12-09 16:03:25.429517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.373 [2024-12-09 16:03:25.429524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.373 [2024-12-09 16:03:25.429530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.373 [2024-12-09 16:03:25.429535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.373 [2024-12-09 16:03:25.430058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.373 [2024-12-09 16:03:25.496188] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:30.373 [2024-12-09 16:03:25.496395] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.373 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:30.632 [2024-12-09 16:03:25.735471] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:30.632 [2024-12-09 16:03:25.735662] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:30.632 [2024-12-09 16:03:25.735745] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:30.632 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:30.891 16:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c937c2d2-b122-4acf-96f9-f3b755cef425 -t 2000 00:30:31.150 [ 00:30:31.150 { 00:30:31.150 "name": "c937c2d2-b122-4acf-96f9-f3b755cef425", 00:30:31.150 "aliases": [ 00:30:31.150 "lvs/lvol" 00:30:31.150 ], 00:30:31.150 "product_name": "Logical Volume", 00:30:31.150 "block_size": 4096, 00:30:31.150 "num_blocks": 38912, 00:30:31.150 "uuid": "c937c2d2-b122-4acf-96f9-f3b755cef425", 00:30:31.150 "assigned_rate_limits": { 00:30:31.150 "rw_ios_per_sec": 0, 00:30:31.150 "rw_mbytes_per_sec": 0, 00:30:31.150 "r_mbytes_per_sec": 0, 00:30:31.150 "w_mbytes_per_sec": 0 00:30:31.150 }, 00:30:31.150 "claimed": false, 00:30:31.150 "zoned": false, 00:30:31.150 "supported_io_types": { 00:30:31.150 "read": true, 00:30:31.150 "write": true, 00:30:31.150 "unmap": true, 00:30:31.150 "flush": false, 00:30:31.150 "reset": true, 00:30:31.150 "nvme_admin": false, 00:30:31.150 "nvme_io": false, 00:30:31.150 "nvme_io_md": false, 00:30:31.150 "write_zeroes": true, 
00:30:31.150 "zcopy": false, 00:30:31.150 "get_zone_info": false, 00:30:31.150 "zone_management": false, 00:30:31.150 "zone_append": false, 00:30:31.150 "compare": false, 00:30:31.150 "compare_and_write": false, 00:30:31.150 "abort": false, 00:30:31.150 "seek_hole": true, 00:30:31.150 "seek_data": true, 00:30:31.150 "copy": false, 00:30:31.150 "nvme_iov_md": false 00:30:31.150 }, 00:30:31.150 "driver_specific": { 00:30:31.150 "lvol": { 00:30:31.150 "lvol_store_uuid": "e7e8db6c-3c23-49f5-8eb5-6d17fffee549", 00:30:31.150 "base_bdev": "aio_bdev", 00:30:31.150 "thin_provision": false, 00:30:31.150 "num_allocated_clusters": 38, 00:30:31.150 "snapshot": false, 00:30:31.150 "clone": false, 00:30:31.150 "esnap_clone": false 00:30:31.150 } 00:30:31.150 } 00:30:31.150 } 00:30:31.150 ] 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:31.150 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:31.410 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:31.410 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:31.668 [2024-12-09 16:03:26.718528] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:31.669 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:31.928 request: 00:30:31.928 { 00:30:31.928 "uuid": "e7e8db6c-3c23-49f5-8eb5-6d17fffee549", 00:30:31.928 "method": "bdev_lvol_get_lvstores", 00:30:31.928 "req_id": 1 00:30:31.928 } 00:30:31.928 Got JSON-RPC error response 00:30:31.928 response: 00:30:31.928 { 00:30:31.928 "code": -19, 00:30:31.928 "message": "No such device" 00:30:31.928 } 00:30:31.928 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:31.928 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:31.928 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:31.928 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:31.928 16:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:31.928 aio_bdev 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:32.187 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c937c2d2-b122-4acf-96f9-f3b755cef425 -t 2000 00:30:32.446 [ 00:30:32.446 { 00:30:32.446 "name": "c937c2d2-b122-4acf-96f9-f3b755cef425", 00:30:32.446 "aliases": [ 00:30:32.446 "lvs/lvol" 00:30:32.446 ], 00:30:32.446 "product_name": "Logical Volume", 00:30:32.446 "block_size": 4096, 00:30:32.446 "num_blocks": 38912, 00:30:32.446 "uuid": "c937c2d2-b122-4acf-96f9-f3b755cef425", 00:30:32.446 "assigned_rate_limits": { 00:30:32.446 "rw_ios_per_sec": 0, 00:30:32.446 "rw_mbytes_per_sec": 0, 00:30:32.446 
"r_mbytes_per_sec": 0, 00:30:32.446 "w_mbytes_per_sec": 0 00:30:32.446 }, 00:30:32.446 "claimed": false, 00:30:32.446 "zoned": false, 00:30:32.446 "supported_io_types": { 00:30:32.446 "read": true, 00:30:32.446 "write": true, 00:30:32.446 "unmap": true, 00:30:32.446 "flush": false, 00:30:32.446 "reset": true, 00:30:32.446 "nvme_admin": false, 00:30:32.446 "nvme_io": false, 00:30:32.446 "nvme_io_md": false, 00:30:32.446 "write_zeroes": true, 00:30:32.446 "zcopy": false, 00:30:32.446 "get_zone_info": false, 00:30:32.446 "zone_management": false, 00:30:32.446 "zone_append": false, 00:30:32.446 "compare": false, 00:30:32.446 "compare_and_write": false, 00:30:32.446 "abort": false, 00:30:32.446 "seek_hole": true, 00:30:32.446 "seek_data": true, 00:30:32.446 "copy": false, 00:30:32.446 "nvme_iov_md": false 00:30:32.446 }, 00:30:32.446 "driver_specific": { 00:30:32.446 "lvol": { 00:30:32.446 "lvol_store_uuid": "e7e8db6c-3c23-49f5-8eb5-6d17fffee549", 00:30:32.446 "base_bdev": "aio_bdev", 00:30:32.446 "thin_provision": false, 00:30:32.446 "num_allocated_clusters": 38, 00:30:32.446 "snapshot": false, 00:30:32.446 "clone": false, 00:30:32.446 "esnap_clone": false 00:30:32.446 } 00:30:32.446 } 00:30:32.446 } 00:30:32.446 ] 00:30:32.446 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:32.446 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:32.446 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:32.705 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:32.705 16:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:32.705 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:32.964 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:32.964 16:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c937c2d2-b122-4acf-96f9-f3b755cef425 00:30:32.964 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7e8db6c-3c23-49f5-8eb5-6d17fffee549 00:30:33.223 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:33.482 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:33.482 00:30:33.482 real 0m16.961s 00:30:33.482 user 0m34.335s 00:30:33.482 sys 0m3.847s 00:30:33.482 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:33.483 ************************************ 00:30:33.483 END TEST lvs_grow_dirty 00:30:33.483 ************************************ 
00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:33.483 nvmf_trace.0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.483 16:03:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.483 rmmod nvme_tcp 00:30:33.483 rmmod nvme_fabrics 00:30:33.483 rmmod nvme_keyring 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2196940 ']' 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2196940 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2196940 ']' 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2196940 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.483 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196940 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.742 
16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196940' 00:30:33.742 killing process with pid 2196940 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2196940 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2196940 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.742 16:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.278 
16:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.278 00:30:36.278 real 0m41.943s 00:30:36.278 user 0m51.992s 00:30:36.278 sys 0m10.229s 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:36.278 ************************************ 00:30:36.278 END TEST nvmf_lvs_grow 00:30:36.278 ************************************ 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.278 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.278 ************************************ 00:30:36.278 START TEST nvmf_bdev_io_wait 00:30:36.278 ************************************ 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:36.279 * Looking for test storage... 
00:30:36.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.279 --rc genhtml_branch_coverage=1 00:30:36.279 --rc genhtml_function_coverage=1 00:30:36.279 --rc genhtml_legend=1 00:30:36.279 --rc geninfo_all_blocks=1 00:30:36.279 --rc geninfo_unexecuted_blocks=1 00:30:36.279 00:30:36.279 ' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.279 --rc genhtml_branch_coverage=1 00:30:36.279 --rc genhtml_function_coverage=1 00:30:36.279 --rc genhtml_legend=1 00:30:36.279 --rc geninfo_all_blocks=1 00:30:36.279 --rc geninfo_unexecuted_blocks=1 00:30:36.279 00:30:36.279 ' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.279 --rc genhtml_branch_coverage=1 00:30:36.279 --rc genhtml_function_coverage=1 00:30:36.279 --rc genhtml_legend=1 00:30:36.279 --rc geninfo_all_blocks=1 00:30:36.279 --rc geninfo_unexecuted_blocks=1 00:30:36.279 00:30:36.279 ' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.279 --rc genhtml_branch_coverage=1 00:30:36.279 --rc genhtml_function_coverage=1 
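The cmp_versions trace above walks two dotted version strings field by field (here deciding that lcov 1.15 is older than 2, so the --rc branch/function-coverage options get enabled). A minimal standalone sketch of that comparison, assuming plain numeric dot-separated versions; the helper name `lt` mirrors the wrapper seen in scripts/common.sh, but the body is a simplified reconstruction, not the actual SPDK code:

```shell
# Simplified reconstruction of the lt/cmp_versions check traced above:
# split both versions on dots and compare numerically, field by field.
lt() {
  local IFS=.
  local -a a=($1) b=($2)   # unquoted on purpose: IFS=. splits the fields
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
    (( x < y )) && return 0           # first differing field decides
    (( x > y )) && return 1
  done
  return 1                            # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"          # same decision as in the trace
```

Note this treats each field as a plain integer, which is enough for the lcov check in the log; suffixes like `-rc1` would need the extra `.-:` splitting the real script does.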
00:30:36.279 --rc genhtml_legend=1 00:30:36.279 --rc geninfo_all_blocks=1 00:30:36.279 --rc geninfo_unexecuted_blocks=1 00:30:36.279 00:30:36.279 ' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:36.279 16:03:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.279 16:03:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.279 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.280 16:03:31 
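Notice that the PATH assembled by paths/export.sh above carries the same /opt/go, /opt/protoc, and /opt/golangci directories many times over: the script prepends its tool directories every time it is sourced, with no duplicate check. A hedged sketch of a guard that would make such prepends idempotent (the `path_prepend` helper is hypothetical, not part of export.sh):

```shell
# Hypothetical idempotent prepend: only add the directory if PATH does
# not already contain it as a colon-delimited component.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                  # already present: do nothing
    *) PATH="$1:$PATH" ;;
  esac
}

PATH=/usr/local/bin:/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # sourced again: no duplicate added
echo "$PATH"
```

Wrapping the colon on both sides of `$PATH` before matching is what makes the component check exact, so `/opt/go/1.21.1/bin` is not confused with a longer path that merely contains it.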
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.280 16:03:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.280 16:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:42.851 16:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:42.851 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:42.851 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:42.851 Found net devices under 0000:af:00.0: cvl_0_0 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:42.851 Found net devices under 0000:af:00.1: cvl_0_1 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.851 16:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.851 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.852 16:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:42.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:30:42.852 00:30:42.852 --- 10.0.0.2 ping statistics --- 00:30:42.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.852 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:42.852 00:30:42.852 --- 10.0.0.1 ping statistics --- 00:30:42.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.852 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.852 16:03:37 
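The nvmf_tcp_init sequence above builds its test topology by moving the target-side interface (cvl_0_0) into a fresh network namespace, addressing the two ends as 10.0.0.2 (target, inside cvl_0_0_ns_spdk) and 10.0.0.1 (initiator, in the root namespace), opening TCP port 4420 in iptables, and ping-testing both directions. On a machine without the physical cvl_0_* interfaces, roughly the same topology can be sketched with a veth pair; this is an illustrative, root-only configuration fragment, and the interface and namespace names are made up, not the ones the harness uses:

```shell
# Illustrative veth/netns equivalent of the topology set up above (root only).
ip netns add tgt_ns                         # target-side namespace
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns           # move the target end into the ns
ip addr add 10.0.0.1/24 dev veth_init       # initiator side, root namespace
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec tgt_ns ping -c 1 10.0.0.1     # target -> initiator
```

Putting the target in its own namespace is what lets the harness run initiator and target on one host while still exercising a real TCP path, which is why every target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`.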
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2200948 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2200948 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2200948 ']' 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 [2024-12-09 16:03:37.358971] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.852 [2024-12-09 16:03:37.359918] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:42.852 [2024-12-09 16:03:37.359955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.852 [2024-12-09 16:03:37.439103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.852 [2024-12-09 16:03:37.481031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.852 [2024-12-09 16:03:37.481067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.852 [2024-12-09 16:03:37.481074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.852 [2024-12-09 16:03:37.481079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.852 [2024-12-09 16:03:37.481085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:42.852 [2024-12-09 16:03:37.482608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.852 [2024-12-09 16:03:37.482715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.852 [2024-12-09 16:03:37.482821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.852 [2024-12-09 16:03:37.482822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.852 [2024-12-09 16:03:37.483072] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.852 16:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 [2024-12-09 16:03:37.617428] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:42.852 [2024-12-09 16:03:37.617570] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:42.852 [2024-12-09 16:03:37.617924] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:42.852 [2024-12-09 16:03:37.618151] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 [2024-12-09 16:03:37.627204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 Malloc0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.852 16:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.852 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.853 [2024-12-09 16:03:37.695576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2201050 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2201053 00:30:42.853 16:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.853 { 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme$subsystem", 00:30:42.853 "trtype": "$TEST_TRANSPORT", 00:30:42.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "$NVMF_PORT", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.853 "hdgst": ${hdgst:-false}, 00:30:42.853 "ddgst": ${ddgst:-false} 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 } 00:30:42.853 EOF 00:30:42.853 )") 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2201056 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.853 16:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.853 { 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme$subsystem", 00:30:42.853 "trtype": "$TEST_TRANSPORT", 00:30:42.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "$NVMF_PORT", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.853 "hdgst": ${hdgst:-false}, 00:30:42.853 "ddgst": ${ddgst:-false} 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 } 00:30:42.853 EOF 00:30:42.853 )") 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2201060 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.853 { 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme$subsystem", 00:30:42.853 "trtype": "$TEST_TRANSPORT", 00:30:42.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "$NVMF_PORT", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.853 "hdgst": ${hdgst:-false}, 00:30:42.853 "ddgst": ${ddgst:-false} 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 } 00:30:42.853 EOF 00:30:42.853 )") 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.853 { 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme$subsystem", 00:30:42.853 "trtype": "$TEST_TRANSPORT", 00:30:42.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "$NVMF_PORT", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.853 "hdgst": ${hdgst:-false}, 00:30:42.853 "ddgst": ${ddgst:-false} 00:30:42.853 }, 00:30:42.853 "method": 
"bdev_nvme_attach_controller" 00:30:42.853 } 00:30:42.853 EOF 00:30:42.853 )") 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2201050 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme1", 00:30:42.853 "trtype": "tcp", 00:30:42.853 "traddr": "10.0.0.2", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "4420", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.853 "hdgst": false, 00:30:42.853 "ddgst": false 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 }' 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme1", 00:30:42.853 "trtype": "tcp", 00:30:42.853 "traddr": "10.0.0.2", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "4420", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.853 "hdgst": false, 00:30:42.853 "ddgst": false 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 }' 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme1", 00:30:42.853 "trtype": "tcp", 00:30:42.853 "traddr": "10.0.0.2", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "4420", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.853 "hdgst": false, 00:30:42.853 "ddgst": false 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 00:30:42.853 }' 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.853 16:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.853 "params": { 00:30:42.853 "name": "Nvme1", 00:30:42.853 "trtype": "tcp", 00:30:42.853 "traddr": "10.0.0.2", 00:30:42.853 "adrfam": "ipv4", 00:30:42.853 "trsvcid": "4420", 00:30:42.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.853 "hdgst": false, 00:30:42.853 "ddgst": false 00:30:42.853 }, 00:30:42.853 "method": "bdev_nvme_attach_controller" 
00:30:42.853 }' 00:30:42.853 [2024-12-09 16:03:37.749208] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:42.853 [2024-12-09 16:03:37.749277] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:42.853 [2024-12-09 16:03:37.750171] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:42.853 [2024-12-09 16:03:37.750221] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:42.854 [2024-12-09 16:03:37.750900] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:42.854 [2024-12-09 16:03:37.750945] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:42.854 [2024-12-09 16:03:37.752867] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:30:42.854 [2024-12-09 16:03:37.752914] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:42.854 [2024-12-09 16:03:37.947134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.854 [2024-12-09 16:03:37.991907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:42.854 [2024-12-09 16:03:38.044351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.112 [2024-12-09 16:03:38.101489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.112 [2024-12-09 16:03:38.105228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:43.112 [2024-12-09 16:03:38.143441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:43.112 [2024-12-09 16:03:38.152140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.112 [2024-12-09 16:03:38.193635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:43.112 Running I/O for 1 seconds... 00:30:43.112 Running I/O for 1 seconds... 00:30:43.369 Running I/O for 1 seconds... 00:30:43.369 Running I/O for 1 seconds... 
00:30:44.302 242584.00 IOPS, 947.59 MiB/s 00:30:44.302 Latency(us) 00:30:44.302 [2024-12-09T15:03:39.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.302 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:44.302 Nvme1n1 : 1.00 242219.47 946.17 0.00 0.00 525.69 221.38 1505.77 00:30:44.302 [2024-12-09T15:03:39.530Z] =================================================================================================================== 00:30:44.302 [2024-12-09T15:03:39.530Z] Total : 242219.47 946.17 0.00 0.00 525.69 221.38 1505.77 00:30:44.302 7808.00 IOPS, 30.50 MiB/s 00:30:44.302 Latency(us) 00:30:44.302 [2024-12-09T15:03:39.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.302 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:44.302 Nvme1n1 : 1.02 7821.78 30.55 0.00 0.00 16229.00 3245.59 25340.59 00:30:44.302 [2024-12-09T15:03:39.530Z] =================================================================================================================== 00:30:44.302 [2024-12-09T15:03:39.530Z] Total : 7821.78 30.55 0.00 0.00 16229.00 3245.59 25340.59 00:30:44.302 12158.00 IOPS, 47.49 MiB/s 00:30:44.302 Latency(us) 00:30:44.302 [2024-12-09T15:03:39.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.302 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:44.302 Nvme1n1 : 1.01 12215.96 47.72 0.00 0.00 10444.45 1607.19 15104.49 00:30:44.302 [2024-12-09T15:03:39.530Z] =================================================================================================================== 00:30:44.302 [2024-12-09T15:03:39.530Z] Total : 12215.96 47.72 0.00 0.00 10444.45 1607.19 15104.49 00:30:44.302 7804.00 IOPS, 30.48 MiB/s 00:30:44.302 Latency(us) 00:30:44.302 [2024-12-09T15:03:39.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.302 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:44.302 Nvme1n1 : 1.01 7940.15 31.02 0.00 0.00 16089.56 2777.48 31332.45 00:30:44.302 [2024-12-09T15:03:39.530Z] =================================================================================================================== 00:30:44.302 [2024-12-09T15:03:39.530Z] Total : 7940.15 31.02 0.00 0.00 16089.56 2777.48 31332.45 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2201053 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2201056 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2201060 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.302 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.561 16:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.561 rmmod nvme_tcp 00:30:44.561 rmmod nvme_fabrics 00:30:44.561 rmmod nvme_keyring 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2200948 ']' 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2200948 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2200948 ']' 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2200948 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2200948 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2200948' 00:30:44.561 killing process with pid 2200948 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2200948 00:30:44.561 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2200948 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.820 16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.820 
16:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.725 00:30:46.725 real 0m10.785s 00:30:46.725 user 0m14.930s 00:30:46.725 sys 0m6.316s 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.725 ************************************ 00:30:46.725 END TEST nvmf_bdev_io_wait 00:30:46.725 ************************************ 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.725 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:46.985 ************************************ 00:30:46.985 START TEST nvmf_queue_depth 00:30:46.985 ************************************ 00:30:46.985 16:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:46.985 * Looking for test storage... 
00:30:46.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:46.985 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.986 --rc genhtml_branch_coverage=1 00:30:46.986 --rc genhtml_function_coverage=1 00:30:46.986 --rc genhtml_legend=1 00:30:46.986 --rc geninfo_all_blocks=1 00:30:46.986 --rc geninfo_unexecuted_blocks=1 00:30:46.986 00:30:46.986 ' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.986 --rc genhtml_branch_coverage=1 00:30:46.986 --rc genhtml_function_coverage=1 00:30:46.986 --rc genhtml_legend=1 00:30:46.986 --rc geninfo_all_blocks=1 00:30:46.986 --rc geninfo_unexecuted_blocks=1 00:30:46.986 00:30:46.986 ' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.986 --rc genhtml_branch_coverage=1 00:30:46.986 --rc genhtml_function_coverage=1 00:30:46.986 --rc genhtml_legend=1 00:30:46.986 --rc geninfo_all_blocks=1 00:30:46.986 --rc geninfo_unexecuted_blocks=1 00:30:46.986 00:30:46.986 ' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.986 --rc genhtml_branch_coverage=1 00:30:46.986 --rc genhtml_function_coverage=1 00:30:46.986 --rc genhtml_legend=1 00:30:46.986 --rc 
geninfo_all_blocks=1 00:30:46.986 --rc geninfo_unexecuted_blocks=1 00:30:46.986 00:30:46.986 ' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.986 16:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.986 16:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.986 16:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.986 16:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.554 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.554 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.554 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.555 
16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:53.555 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.555 16:03:47 
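The `e810+=`/`x722+=`/`mlx+=` lines above are bucketing PCI NICs by vendor:device ID before the per-device loop prints what it found. A sketch of that bucketing in Python (the IDs are copied from the trace; the dict and function names are illustrative, not SPDK's):

```python
# Vendor IDs used by the trace (intel=0x8086, mellanox=0x15b3).
INTEL, MELLANOX = 0x8086, 0x15B3

# Device-ID-to-family map, transcribed from the e810/x722/mlx arrays above.
NIC_FAMILIES = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0xA2DC): "mlx",
    (MELLANOX, 0x1021): "mlx",
    (MELLANOX, 0xA2D6): "mlx",
    (MELLANOX, 0x101D): "mlx",
    (MELLANOX, 0x101B): "mlx",
    (MELLANOX, 0x1017): "mlx",
    (MELLANOX, 0x1019): "mlx",
    (MELLANOX, 0x1015): "mlx",
    (MELLANOX, 0x1013): "mlx",
}

def classify(vendor: int, device: int) -> str:
    """Map a PCI vendor/device pair to the NIC family the scripts test for."""
    return NIC_FAMILIES.get((vendor, device), "unknown")

# The two ports this run found, 0000:af:00.0/.1 (0x8086 - 0x159b):
print(classify(0x8086, 0x159B))
```

Both ports classify as e810, which is why the trace takes the `e810 == e810` branch and keeps only those devices in `pci_devs`.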
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:53.555 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:53.555 Found net devices under 0000:af:00.0: cvl_0_0 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:53.555 Found net devices under 0000:af:00.1: cvl_0_1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.555 16:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.555 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:53.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:30:53.555 00:30:53.555 --- 10.0.0.2 ping statistics --- 00:30:53.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.556 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:53.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:53.556 00:30:53.556 --- 10.0.0.1 ping statistics --- 00:30:53.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.556 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:53.556 16:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:53.556 16:03:48 
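The namespace plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) reduces to a short command sequence: move the target-side port into a fresh namespace, address both ends on 10.0.0.0/24, open TCP 4420, and ping across. The sketch below only prints the commands rather than running them, since creating the namespace needs CAP_NET_ADMIN and the `cvl_0_*` interface names and addressing are specific to this rig:

```shell
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, stays in the root namespace

cmds=$(cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
EOF
)
printf '%s\n' "$cmds"
```

Isolating the target in its own namespace lets a single two-port host act as both initiator and target over a real link, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.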
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2204898 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2204898 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2204898 ']' 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 [2024-12-09 16:03:48.091740] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:53.556 [2024-12-09 16:03:48.092627] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:30:53.556 [2024-12-09 16:03:48.092657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.556 [2024-12-09 16:03:48.171714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.556 [2024-12-09 16:03:48.209090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.556 [2024-12-09 16:03:48.209120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.556 [2024-12-09 16:03:48.209128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.556 [2024-12-09 16:03:48.209134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.556 [2024-12-09 16:03:48.209139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.556 [2024-12-09 16:03:48.209657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.556 [2024-12-09 16:03:48.275670] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:53.556 [2024-12-09 16:03:48.275886] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 [2024-12-09 16:03:48.350372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 Malloc0 00:30:53.556 16:03:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 [2024-12-09 16:03:48.422459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 
16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2204947 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2204947 /var/tmp/bdevperf.sock 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2204947 ']' 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:53.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 [2024-12-09 16:03:48.472867] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:30:53.556 [2024-12-09 16:03:48.472906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204947 ] 00:30:53.556 [2024-12-09 16:03:48.547417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.556 [2024-12-09 16:03:48.588620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.556 NVMe0n1 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.556 16:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.815 Running I/O for 10 seconds... 
00:30:55.685 12288.00 IOPS, 48.00 MiB/s [2024-12-09T15:03:51.848Z] 12291.50 IOPS, 48.01 MiB/s [2024-12-09T15:03:52.923Z] 12292.33 IOPS, 48.02 MiB/s [2024-12-09T15:03:53.859Z] 12374.25 IOPS, 48.34 MiB/s [2024-12-09T15:03:55.234Z] 12465.60 IOPS, 48.69 MiB/s [2024-12-09T15:03:56.167Z] 12466.83 IOPS, 48.70 MiB/s [2024-12-09T15:03:57.101Z] 12497.71 IOPS, 48.82 MiB/s [2024-12-09T15:03:58.035Z] 12505.50 IOPS, 48.85 MiB/s [2024-12-09T15:03:58.971Z] 12516.67 IOPS, 48.89 MiB/s [2024-12-09T15:03:58.971Z] 12517.60 IOPS, 48.90 MiB/s
00:31:03.743 Latency(us)
00:31:03.743 [2024-12-09T15:03:58.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:03.743 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:31:03.743 Verification LBA range: start 0x0 length 0x4000
00:31:03.743 NVMe0n1 : 10.05 12553.37 49.04 0.00 0.00 81297.78 9237.46 54176.43
00:31:03.743 [2024-12-09T15:03:58.971Z] ===================================================================================================================
00:31:03.743 [2024-12-09T15:03:58.971Z] Total : 12553.37 49.04 0.00 0.00 81297.78 9237.46 54176.43
00:31:03.743 {
00:31:03.743   "results": [
00:31:03.743     {
00:31:03.743       "job": "NVMe0n1",
00:31:03.743       "core_mask": "0x1",
00:31:03.743       "workload": "verify",
00:31:03.743       "status": "finished",
00:31:03.743       "verify_range": {
00:31:03.743         "start": 0,
00:31:03.743         "length": 16384
00:31:03.743       },
00:31:03.743       "queue_depth": 1024,
00:31:03.743       "io_size": 4096,
00:31:03.743       "runtime": 10.047979,
00:31:03.743       "iops": 12553.370185188483,
00:31:03.743       "mibps": 49.03660228589251,
00:31:03.743       "io_failed": 0,
00:31:03.743       "io_timeout": 0,
00:31:03.743       "avg_latency_us": 81297.77991042171,
00:31:03.743       "min_latency_us": 9237.455238095237,
00:31:03.743       "max_latency_us": 54176.426666666666
00:31:03.743     }
00:31:03.743   ],
00:31:03.743   "core_count": 1
00:31:03.743 }
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth --
target/queue_depth.sh@39 -- # killprocess 2204947
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2204947 ']'
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2204947
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:03.743 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2204947
00:31:04.002 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:04.002 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:04.002 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2204947'
00:31:04.002 killing process with pid 2204947
00:31:04.002 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2204947
00:31:04.002 Received shutdown signal, test time was about 10.000000 seconds
00:31:04.002
00:31:04.002 Latency(us)
00:31:04.002 [2024-12-09T15:03:59.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:04.002 [2024-12-09T15:03:59.230Z] ===================================================================================================================
00:31:04.002 [2024-12-09T15:03:59.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:04.002 16:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2204947
00:31:04.002 16:03:59
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.002 rmmod nvme_tcp 00:31:04.002 rmmod nvme_fabrics 00:31:04.002 rmmod nvme_keyring 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2204898 ']' 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2204898 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2204898 ']' 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2204898 00:31:04.002 16:03:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.002 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2204898 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2204898' 00:31:04.261 killing process with pid 2204898 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2204898 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2204898 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.261 16:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.797 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.797 00:31:06.797 real 0m19.566s 00:31:06.797 user 0m22.560s 00:31:06.797 sys 0m6.194s 00:31:06.797 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.797 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:06.797 ************************************ 00:31:06.797 END TEST nvmf_queue_depth 00:31:06.797 ************************************ 00:31:06.797 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.798 ************************************ 00:31:06.798 START 
TEST nvmf_target_multipath 00:31:06.798 ************************************ 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:06.798 * Looking for test storage... 00:31:06.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.798 16:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.798 --rc genhtml_branch_coverage=1 00:31:06.798 --rc genhtml_function_coverage=1 00:31:06.798 --rc genhtml_legend=1 00:31:06.798 --rc geninfo_all_blocks=1 00:31:06.798 --rc geninfo_unexecuted_blocks=1 00:31:06.798 00:31:06.798 ' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.798 --rc genhtml_branch_coverage=1 00:31:06.798 --rc genhtml_function_coverage=1 00:31:06.798 --rc genhtml_legend=1 00:31:06.798 --rc geninfo_all_blocks=1 00:31:06.798 --rc geninfo_unexecuted_blocks=1 00:31:06.798 00:31:06.798 ' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.798 --rc genhtml_branch_coverage=1 00:31:06.798 --rc genhtml_function_coverage=1 00:31:06.798 --rc genhtml_legend=1 00:31:06.798 --rc geninfo_all_blocks=1 00:31:06.798 --rc geninfo_unexecuted_blocks=1 00:31:06.798 00:31:06.798 ' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:06.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.798 --rc genhtml_branch_coverage=1 00:31:06.798 --rc genhtml_function_coverage=1 00:31:06.798 --rc genhtml_legend=1 00:31:06.798 --rc geninfo_all_blocks=1 00:31:06.798 --rc geninfo_unexecuted_blocks=1 00:31:06.798 00:31:06.798 ' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.798 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.799 16:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.799 16:04:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.799 16:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.369 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.369 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:13.370 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:13.370 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:13.370 Found net devices under 0000:af:00.0: cvl_0_0 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.370 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:13.370 Found net devices under 0000:af:00.1: cvl_0_1 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.370 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.370 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:31:13.370 00:31:13.370 --- 10.0.0.2 ping statistics --- 00:31:13.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.370 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:13.370 00:31:13.370 --- 10.0.0.1 ping statistics --- 00:31:13.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.370 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:13.370 only one NIC for nvmf test 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:13.370 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.370 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.370 rmmod nvme_tcp 00:31:13.370 rmmod nvme_fabrics 00:31:13.370 rmmod nvme_keyring 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:13.371 16:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.371 16:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.751 
16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.751 00:31:14.751 real 0m8.237s 00:31:14.751 user 0m1.825s 00:31:14.751 sys 0m4.430s 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:14.751 ************************************ 00:31:14.751 END TEST nvmf_target_multipath 00:31:14.751 ************************************ 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.751 ************************************ 00:31:14.751 START TEST nvmf_zcopy 00:31:14.751 ************************************ 00:31:14.751 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:14.751 * Looking for test storage... 
00:31:15.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.011 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:15.011 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:15.011 16:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:15.011 16:04:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.011 --rc genhtml_branch_coverage=1 00:31:15.011 --rc genhtml_function_coverage=1 00:31:15.011 --rc genhtml_legend=1 00:31:15.011 --rc geninfo_all_blocks=1 00:31:15.011 --rc geninfo_unexecuted_blocks=1 00:31:15.011 00:31:15.011 ' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.011 --rc genhtml_branch_coverage=1 00:31:15.011 --rc genhtml_function_coverage=1 00:31:15.011 --rc genhtml_legend=1 00:31:15.011 --rc geninfo_all_blocks=1 00:31:15.011 --rc geninfo_unexecuted_blocks=1 00:31:15.011 00:31:15.011 ' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.011 --rc genhtml_branch_coverage=1 00:31:15.011 --rc genhtml_function_coverage=1 00:31:15.011 --rc genhtml_legend=1 00:31:15.011 --rc geninfo_all_blocks=1 00:31:15.011 --rc geninfo_unexecuted_blocks=1 00:31:15.011 00:31:15.011 ' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:15.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.011 --rc genhtml_branch_coverage=1 00:31:15.011 --rc genhtml_function_coverage=1 00:31:15.011 --rc genhtml_legend=1 00:31:15.011 --rc geninfo_all_blocks=1 00:31:15.011 --rc geninfo_unexecuted_blocks=1 00:31:15.011 00:31:15.011 ' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.011 16:04:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.011 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.012 16:04:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.012 16:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.583 
16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.583 16:04:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.583 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
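The entries above come from `gather_supported_nvmf_pci_devs`, which buckets NICs by PCI vendor:device ID: Intel (0x8086) E810 parts such as 0x1592/0x159b, the x722 at 0x37d2, and a list of Mellanox (0x15b3) devices. A minimal sketch of that classification, assuming a made-up helper name (`classify_nic`) and using only the IDs the log itself prints:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the vendor:device bucketing seen in the log.
# The function name is illustrative; the real script builds e810/x722/mlx
# arrays from a PCI bus cache rather than classifying one ID at a time.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel x722
        0x15b3:*)                    echo mlx  ;;    # any Mellanox device
        *)                           echo unknown ;;
    esac
}

# On a real host the IDs would come from sysfs, e.g.:
#   vendor=$(cat /sys/bus/pci/devices/0000:af:00.0/vendor)
#   device=$(cat /sys/bus/pci/devices/0000:af:00.0/device)
classify_nic 0x8086:0x159b   # prints "e810" -- the 0000:af:00.x ports found in this run
```

This is why the run below reports both 0000:af:00.0 and 0000:af:00.1 as E810 ("ice" driver) devices.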
00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:21.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:21.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:21.584 Found net devices under 0000:af:00.0: cvl_0_0 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:21.584 Found net devices under 0000:af:00.1: cvl_0_1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.584 16:04:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:31:21.584 00:31:21.584 --- 10.0.0.2 ping statistics --- 00:31:21.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.584 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:31:21.584 00:31:21.584 --- 10.0.0.1 ping statistics --- 00:31:21.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.584 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2213514 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2213514 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2213514 ']' 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.584 16:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.584 [2024-12-09 16:04:15.952584] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.584 [2024-12-09 16:04:15.953534] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:31:21.585 [2024-12-09 16:04:15.953572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.585 [2024-12-09 16:04:16.032697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.585 [2024-12-09 16:04:16.070920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.585 [2024-12-09 16:04:16.070956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.585 [2024-12-09 16:04:16.070963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.585 [2024-12-09 16:04:16.070972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.585 [2024-12-09 16:04:16.070977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.585 [2024-12-09 16:04:16.071516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.585 [2024-12-09 16:04:16.138827] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.585 [2024-12-09 16:04:16.139047] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
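At this point `nvmf_tgt` has been launched inside the `cvl_0_0_ns_spdk` namespace and `waitforlisten 2213514` blocks until the app exposes its RPC socket at /var/tmp/spdk.sock. A hedged sketch of that wait loop, with the helper name (`wait_for_rpc_sock`) and the `-e` probe assumed for illustration (the real helper in autotest_common.sh checks for a UNIX socket and retries up to `max_retries` times while verifying the PID is still alive):

```shell
# Illustrative poll-until-listening loop, not the actual waitforlisten body.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -e "$sock" ] && return 0               # RPC endpoint is up
        sleep 0.1
    done
    return 1                                     # gave up after max_retries
}
```

Only once this returns does the script proceed to the `rpc_cmd` calls below, which is why the "Waiting for process to start up..." message precedes every RPC in the log.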
00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 [2024-12-09 16:04:16.212182] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 
16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 [2024-12-09 16:04:16.240428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 malloc0 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.585 { 00:31:21.585 "params": { 00:31:21.585 "name": "Nvme$subsystem", 00:31:21.585 "trtype": "$TEST_TRANSPORT", 00:31:21.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.585 "adrfam": "ipv4", 00:31:21.585 "trsvcid": "$NVMF_PORT", 00:31:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.585 "hdgst": ${hdgst:-false}, 00:31:21.585 "ddgst": ${ddgst:-false} 00:31:21.585 }, 00:31:21.585 "method": "bdev_nvme_attach_controller" 00:31:21.585 } 00:31:21.585 EOF 00:31:21.585 )") 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:21.585 16:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:21.585 16:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.585 "params": { 00:31:21.585 "name": "Nvme1", 00:31:21.585 "trtype": "tcp", 00:31:21.585 "traddr": "10.0.0.2", 00:31:21.585 "adrfam": "ipv4", 00:31:21.585 "trsvcid": "4420", 00:31:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.585 "hdgst": false, 00:31:21.585 "ddgst": false 00:31:21.585 }, 00:31:21.585 "method": "bdev_nvme_attach_controller" 00:31:21.585 }' 00:31:21.585 [2024-12-09 16:04:16.338522] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:31:21.585 [2024-12-09 16:04:16.338572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213539 ] 00:31:21.585 [2024-12-09 16:04:16.415163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.585 [2024-12-09 16:04:16.454295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.585 Running I/O for 10 seconds... 
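The `gen_nvmf_target_json` output above is built with a here-doc-per-subsystem pattern: each fragment is expanded (so `$subsystem` becomes `1`), pushed into a `config` array, and the array is comma-joined for bdevperf to read over `/dev/fd`. A trimmed sketch of that pattern, with only two of the fields kept for illustration (the real helper emits the full `params`/`method` object and pipes the result through `jq .` to validate it):

```shell
# Sketch of the here-doc accumulation pattern from nvmf/common.sh@560-586.
# Field list is abbreviated; the join via IFS=, mirrors the real helper.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do          # default to one subsystem, as in the log
        config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
        )")
    done
    local IFS=,                             # "${config[*]}" joins on the first IFS char
    printf '%s\n' "${config[*]}"
}

gen_target_json 1
```

With a single argument this yields the one `Nvme1` object that appears in the log's rendered config.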
00:31:23.458 8582.00 IOPS, 67.05 MiB/s [2024-12-09T15:04:20.063Z] 8649.00 IOPS, 67.57 MiB/s [2024-12-09T15:04:20.999Z] 8682.33 IOPS, 67.83 MiB/s [2024-12-09T15:04:21.935Z] 8695.25 IOPS, 67.93 MiB/s [2024-12-09T15:04:22.871Z] 8714.40 IOPS, 68.08 MiB/s [2024-12-09T15:04:23.808Z] 8722.83 IOPS, 68.15 MiB/s [2024-12-09T15:04:24.744Z] 8733.14 IOPS, 68.23 MiB/s [2024-12-09T15:04:25.678Z] 8738.00 IOPS, 68.27 MiB/s [2024-12-09T15:04:27.056Z] 8733.89 IOPS, 68.23 MiB/s [2024-12-09T15:04:27.056Z] 8735.00 IOPS, 68.24 MiB/s 00:31:31.828 Latency(us) 00:31:31.828 [2024-12-09T15:04:27.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.828 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:31.828 Verification LBA range: start 0x0 length 0x1000 00:31:31.828 Nvme1n1 : 10.05 8702.68 67.99 0.00 0.00 14615.26 2200.14 44189.99 00:31:31.828 [2024-12-09T15:04:27.056Z] =================================================================================================================== 00:31:31.828 [2024-12-09T15:04:27.056Z] Total : 8702.68 67.99 0.00 0.00 14615.26 2200.14 44189.99 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2215232 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:31.828 16:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:31.828 { 00:31:31.828 "params": { 00:31:31.828 "name": "Nvme$subsystem", 00:31:31.828 "trtype": "$TEST_TRANSPORT", 00:31:31.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.828 "adrfam": "ipv4", 00:31:31.828 "trsvcid": "$NVMF_PORT", 00:31:31.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.828 "hdgst": ${hdgst:-false}, 00:31:31.828 "ddgst": ${ddgst:-false} 00:31:31.828 }, 00:31:31.828 "method": "bdev_nvme_attach_controller" 00:31:31.828 } 00:31:31.828 EOF 00:31:31.828 )") 00:31:31.828 [2024-12-09 16:04:26.883858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.883892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:31.828 16:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:31.828 "params": { 00:31:31.828 "name": "Nvme1", 00:31:31.828 "trtype": "tcp", 00:31:31.828 "traddr": "10.0.0.2", 00:31:31.828 "adrfam": "ipv4", 00:31:31.828 "trsvcid": "4420", 00:31:31.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:31.828 "hdgst": false, 00:31:31.828 "ddgst": false 00:31:31.828 }, 00:31:31.828 "method": "bdev_nvme_attach_controller" 00:31:31.828 }' 00:31:31.828 [2024-12-09 16:04:26.895818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.895831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.903812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.903823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.915815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.915825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.922279] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
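The log above shows target/zcopy.sh building bdevperf's JSON config via gen_nvmf_target_json: one heredoc fragment per subsystem appended to a `config` array, the fragments joined with `IFS=,`, and the result validated/pretty-printed with `jq` before being fed to bdevperf over `/dev/fd/63`. A minimal standalone sketch of that same pattern (the variable values here mirror the expanded output in the log; this is an illustration, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern visible in the log:
# one JSON fragment per subsystem from a heredoc template, joined
# with commas via IFS, then validated/pretty-printed with jq.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join fragments with "," (matters once more than one subsystem is added)
# and let jq reject anything malformed before bdevperf ever sees it.
IFS=,
json=$(printf '%s\n' "${config[*]}" | jq .)
echo "$json"
```

In the real test this output is passed to bdevperf as `--json /dev/fd/63` via process substitution, so the config never touches disk.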
00:31:31.828 [2024-12-09 16:04:26.922319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215232 ] 00:31:31.828 [2024-12-09 16:04:26.927812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.927822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.939811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.939821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.951812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.951822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.963811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.963825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.975812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.975822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.987813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:26.987823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:26.995303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.828 [2024-12-09 16:04:26.999812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:31.828 [2024-12-09 16:04:26.999822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:27.011814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:27.011827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:27.023810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:27.023820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:27.035771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.828 [2024-12-09 16:04:27.035814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:27.035824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.828 [2024-12-09 16:04:27.047820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.828 [2024-12-09 16:04:27.047834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.087 [2024-12-09 16:04:27.059841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.059866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.071821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.071834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.083813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.083826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.095816] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.095828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.107824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.107840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.119876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.119894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.131823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.131840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.143818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.143830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.155821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.155837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.167819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.167834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.216506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.216527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.227997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.228012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 Running I/O for 5 seconds... 00:31:32.088 [2024-12-09 16:04:27.243131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.243151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.257644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.257663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.272294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.272311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.287483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.287501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.088 [2024-12-09 16:04:27.301850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.088 [2024-12-09 16:04:27.301868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.316952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.316971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.332001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.332021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.344272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.344290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.357916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.357934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.372626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.372643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.387971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.387990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.395485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.395502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.408940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.408958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.424146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.424164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.440422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.440444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.456055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 
[2024-12-09 16:04:27.456074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.466168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.466186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.480985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.481009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.495463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.495491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.509187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.509204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.523913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.523931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.534977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.534994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.549680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.549698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.347 [2024-12-09 16:04:27.564124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.347 [2024-12-09 16:04:27.564141] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.580382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.580401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.595484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.595512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.607515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.607532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.621699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.621717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.636268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.636286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.651156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.651174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.666105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.666123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.680548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.680566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:32.607 [2024-12-09 16:04:27.695315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.695338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.709015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.709033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.723623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.723642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.734690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.734708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.749377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.749406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.763863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.763882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.777043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.777061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.788094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.788112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.801747] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.801767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.816527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.816545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.607 [2024-12-09 16:04:27.832030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.607 [2024-12-09 16:04:27.832048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.845641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.845660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.860326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.860344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.875662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.875680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.889563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.889581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.904094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.904112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.919858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.919877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.931465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.931485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.945994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.946012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.960449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.960466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.975386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.975404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:27.989820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:27.989838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.004661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.004678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.019731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.019749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.031812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 
[2024-12-09 16:04:28.031830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.045585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.045603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.060204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.060228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.071785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.071803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.866 [2024-12-09 16:04:28.085573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:32.866 [2024-12-09 16:04:28.085592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.100686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.100706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.115478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.115497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.126494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.126513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.141229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.141264] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.155905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.155924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.167033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.167053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.181868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.181887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.196461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.196480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.125 [2024-12-09 16:04:28.211502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.125 [2024-12-09 16:04:28.211520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.224516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.224534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 16776.00 IOPS, 131.06 MiB/s [2024-12-09T15:04:28.354Z] [2024-12-09 16:04:28.240317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.240335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.255730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.255748] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.269367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.269390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.283875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.283893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.294884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.294902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.309548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.309567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.323882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.323900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.336730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.336748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.126 [2024-12-09 16:04:28.351843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.126 [2024-12-09 16:04:28.351867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.384 [2024-12-09 16:04:28.365198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.365224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:33.385 [2024-12-09 16:04:28.379935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.379954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.393587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.393605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.407871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.407889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.420568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.420586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.433767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.433784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.448680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.448697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.464038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.464057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.474681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.474699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.385 [2024-12-09 16:04:28.489035] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.385 [2024-12-09 16:04:28.489053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[editor's note: the error pair above — subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused "Unable to add namespace" — repeats identically at roughly 12-15 ms intervals from 16:04:28.503491 through 16:04:30.888328 (~150 occurrences); repeated entries elided. The interleaved bdevperf throughput samples are preserved below.]
00:31:34.176 16844.50 IOPS, 131.60 MiB/s [2024-12-09T15:04:29.404Z]
00:31:35.212 16833.00 IOPS, 131.51 MiB/s [2024-12-09T15:04:30.440Z]
00:31:35.731 [2024-12-09 16:04:30.901643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.731 [2024-12-09 16:04:30.901660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:35.731 [2024-12-09 16:04:30.916236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.731 [2024-12-09 16:04:30.916253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.731 [2024-12-09 16:04:30.929622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.731 [2024-12-09 16:04:30.929640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.731 [2024-12-09 16:04:30.943921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.731 [2024-12-09 16:04:30.943939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.731 [2024-12-09 16:04:30.956914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.731 [2024-12-09 16:04:30.956934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:30.967257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:30.967276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:30.981564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:30.981584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:30.996224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:30.996244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.011630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.011648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.025071] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.025092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.036618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.036637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.051983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.052002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.062589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.062609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.077185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.077203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.091920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.091938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.102614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.102632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.117282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.117301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.131756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.131774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.144309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.144327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.157959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.990 [2024-12-09 16:04:31.157978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.990 [2024-12-09 16:04:31.172432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.991 [2024-12-09 16:04:31.172449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.991 [2024-12-09 16:04:31.187629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.991 [2024-12-09 16:04:31.187648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.991 [2024-12-09 16:04:31.201271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.991 [2024-12-09 16:04:31.201289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.991 [2024-12-09 16:04:31.216191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.991 [2024-12-09 16:04:31.216226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.249 [2024-12-09 16:04:31.231070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.231089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 16841.25 IOPS, 131.57 MiB/s [2024-12-09T15:04:31.478Z] [2024-12-09 16:04:31.245629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.245647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.260534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.260551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.275555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.275574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.289329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.289347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.303831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.303849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.314755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.314772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.329490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.329509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.343549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.343568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.356291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 
[2024-12-09 16:04:31.356309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.369439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.369457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.384226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.384244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.399892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.399913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.411225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.411243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.425500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.425518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.439907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.439929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.453427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.453445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.250 [2024-12-09 16:04:31.468101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.250 [2024-12-09 16:04:31.468117] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.483614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.483632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.497421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.497439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.511897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.511914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.522964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.522981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.537558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.537576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.552013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.552030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.565243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.565261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.580016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.580033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:36.509 [2024-12-09 16:04:31.590696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.590714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.605399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.605417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.619336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.619354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.633543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.633560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.648201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.648227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.663374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.663391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.677329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.677346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.691965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.691983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.703013] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.703036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.717425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.717443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.509 [2024-12-09 16:04:31.731549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.509 [2024-12-09 16:04:31.731567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.744761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.744779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.759949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.759967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.772737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.772755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.785593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.785612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.799959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.799977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.810876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.810895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.825565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.825583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.839816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.839834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.850251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.850268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.864826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.864844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.879504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.879522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.892997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.893014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.907835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.768 [2024-12-09 16:04:31.907853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.768 [2024-12-09 16:04:31.922118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 
[2024-12-09 16:04:31.922136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.769 [2024-12-09 16:04:31.936722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 [2024-12-09 16:04:31.936741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.769 [2024-12-09 16:04:31.951515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 [2024-12-09 16:04:31.951532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.769 [2024-12-09 16:04:31.965823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 [2024-12-09 16:04:31.965845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.769 [2024-12-09 16:04:31.980263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 [2024-12-09 16:04:31.980280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.769 [2024-12-09 16:04:31.995564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.769 [2024-12-09 16:04:31.995583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.027 [2024-12-09 16:04:32.009210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.027 [2024-12-09 16:04:32.009233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.027 [2024-12-09 16:04:32.024145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.027 [2024-12-09 16:04:32.024163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.027 [2024-12-09 16:04:32.039996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.027 [2024-12-09 16:04:32.040014] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.027 [2024-12-09 16:04:32.052725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.027 [2024-12-09 16:04:32.052743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.027 [2024-12-09 16:04:32.068185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.068203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.083252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.083270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.098087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.098105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.112625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.112642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.127561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.127580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.140613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.140631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.156100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.156118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:37.028 [2024-12-09 16:04:32.169381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.169399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.184256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.184273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.199486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.199505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.213588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.213606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.228033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.228051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 [2024-12-09 16:04:32.241394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.241411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.028 16860.60 IOPS, 131.72 MiB/s 00:31:37.028 Latency(us) 00:31:37.028 [2024-12-09T15:04:32.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.028 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:37.028 Nvme1n1 : 5.01 16865.49 131.76 0.00 0.00 7583.05 2137.72 12857.54 00:31:37.028 [2024-12-09T15:04:32.256Z] =================================================================================================================== 00:31:37.028 
[2024-12-09T15:04:32.256Z] Total : 16865.49 131.76 0.00 0.00 7583.05 2137.72 12857.54 00:31:37.028 [2024-12-09 16:04:32.251821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.028 [2024-12-09 16:04:32.251839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.263818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.263834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.275829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.275850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.287821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.287838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.299817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.299831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.311819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.311833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.323816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.323832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.335814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.335827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.347812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.347826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.359812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.359823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.371811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.371819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.383816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.383830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.395811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.395820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 [2024-12-09 16:04:32.407812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.287 [2024-12-09 16:04:32.407822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2215232) - No such process 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2215232 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:37.287 delay0 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.287 16:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:37.547 [2024-12-09 16:04:32.520399] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:45.671 Initializing NVMe Controllers 00:31:45.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:31:45.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.671 Initialization complete. Launching workers. 00:31:45.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 13440 00:31:45.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13654, failed to submit 78 00:31:45.671 success 13574, unsuccessful 80, failed 0 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.671 rmmod nvme_tcp 00:31:45.671 rmmod nvme_fabrics 00:31:45.671 rmmod nvme_keyring 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2213514 ']' 00:31:45.671 16:04:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2213514 ']' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213514' 00:31:45.671 killing process with pid 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2213514 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:45.671 16:04:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.671 16:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.054 00:31:47.054 real 0m32.149s 00:31:47.054 user 0m41.653s 00:31:47.054 sys 0m12.848s 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:47.054 ************************************ 00:31:47.054 END TEST nvmf_zcopy 00:31:47.054 ************************************ 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.054 16:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.054 ************************************ 00:31:47.054 START TEST nvmf_nmic 00:31:47.054 ************************************ 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:47.054 * Looking for test storage... 00:31:47.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:47.054 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.314 --rc genhtml_branch_coverage=1 00:31:47.314 --rc genhtml_function_coverage=1 00:31:47.314 --rc genhtml_legend=1 00:31:47.314 --rc geninfo_all_blocks=1 00:31:47.314 --rc geninfo_unexecuted_blocks=1 00:31:47.314 00:31:47.314 ' 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.314 --rc genhtml_branch_coverage=1 00:31:47.314 --rc genhtml_function_coverage=1 00:31:47.314 --rc genhtml_legend=1 00:31:47.314 --rc geninfo_all_blocks=1 00:31:47.314 --rc geninfo_unexecuted_blocks=1 00:31:47.314 00:31:47.314 ' 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.314 --rc genhtml_branch_coverage=1 00:31:47.314 --rc genhtml_function_coverage=1 00:31:47.314 --rc genhtml_legend=1 00:31:47.314 --rc geninfo_all_blocks=1 00:31:47.314 --rc geninfo_unexecuted_blocks=1 00:31:47.314 
00:31:47.314 ' 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:47.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.314 --rc genhtml_branch_coverage=1 00:31:47.314 --rc genhtml_function_coverage=1 00:31:47.314 --rc genhtml_legend=1 00:31:47.314 --rc geninfo_all_blocks=1 00:31:47.314 --rc geninfo_unexecuted_blocks=1 00:31:47.314 00:31:47.314 ' 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.314 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.315 16:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.315 16:04:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.315 16:04:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.886 16:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.886 16:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:53.886 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:53.886 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.886 16:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:53.886 Found net devices under 0000:af:00.0: cvl_0_0 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.886 16:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:53.886 Found net devices under 0000:af:00.1: cvl_0_1 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.886 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.887 16:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.887 16:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:31:53.887 00:31:53.887 --- 10.0.0.2 ping statistics --- 00:31:53.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.887 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:53.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:31:53.887 00:31:53.887 --- 10.0.0.1 ping statistics --- 00:31:53.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.887 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2220657 
00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2220657 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2220657 ']' 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 [2024-12-09 16:04:48.349986] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:53.887 [2024-12-09 16:04:48.350874] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:31:53.887 [2024-12-09 16:04:48.350907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.887 [2024-12-09 16:04:48.429888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.887 [2024-12-09 16:04:48.472615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.887 [2024-12-09 16:04:48.472648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.887 [2024-12-09 16:04:48.472656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.887 [2024-12-09 16:04:48.472662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.887 [2024-12-09 16:04:48.472667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.887 [2024-12-09 16:04:48.474139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.887 [2024-12-09 16:04:48.474168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.887 [2024-12-09 16:04:48.474199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.887 [2024-12-09 16:04:48.474200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.887 [2024-12-09 16:04:48.542166] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:53.887 [2024-12-09 16:04:48.542556] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:53.887 [2024-12-09 16:04:48.543019] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:53.887 [2024-12-09 16:04:48.543192] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:53.887 [2024-12-09 16:04:48.543263] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 [2024-12-09 16:04:48.611112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 Malloc0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.887 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 [2024-12-09 16:04:48.691312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.888 16:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:53.888 test case1: single bdev can't be used in multiple subsystems 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 [2024-12-09 16:04:48.722806] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:53.888 [2024-12-09 16:04:48.722826] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:53.888 [2024-12-09 16:04:48.722834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:53.888 request: 00:31:53.888 { 00:31:53.888 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:53.888 "namespace": { 00:31:53.888 "bdev_name": "Malloc0", 00:31:53.888 "no_auto_visible": false, 00:31:53.888 "hide_metadata": false 00:31:53.888 }, 00:31:53.888 "method": "nvmf_subsystem_add_ns", 00:31:53.888 "req_id": 1 00:31:53.888 } 00:31:53.888 Got JSON-RPC error response 00:31:53.888 response: 00:31:53.888 { 00:31:53.888 "code": -32602, 00:31:53.888 "message": "Invalid parameters" 00:31:53.888 } 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:53.888 Adding namespace failed - expected result. 
00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:53.888 test case2: host connect to nvmf target in multiple paths 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 [2024-12-09 16:04:48.734897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:53.888 16:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:54.147 16:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:54.147 16:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:54.147 16:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:54.147 16:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:54.147 16:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:56.176 16:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:56.176 [global] 00:31:56.176 thread=1 00:31:56.176 invalidate=1 00:31:56.176 rw=write 00:31:56.176 time_based=1 00:31:56.176 runtime=1 00:31:56.176 ioengine=libaio 00:31:56.176 direct=1 00:31:56.176 bs=4096 00:31:56.176 iodepth=1 00:31:56.176 norandommap=0 00:31:56.176 numjobs=1 00:31:56.176 00:31:56.176 verify_dump=1 00:31:56.176 verify_backlog=512 00:31:56.176 verify_state_save=0 00:31:56.176 do_verify=1 00:31:56.176 verify=crc32c-intel 00:31:56.176 [job0] 00:31:56.176 filename=/dev/nvme0n1 00:31:56.176 Could not set queue depth (nvme0n1) 00:31:56.458 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:56.458 fio-3.35 00:31:56.458 Starting 1 thread 00:31:57.831 00:31:57.831 job0: (groupid=0, jobs=1): err= 0: pid=2221386: Mon Dec 9 
16:04:52 2024 00:31:57.831 read: IOPS=2254, BW=9017KiB/s (9234kB/s)(9288KiB/1030msec) 00:31:57.831 slat (nsec): min=6295, max=31762, avg=7069.73, stdev=1038.50 00:31:57.831 clat (usec): min=192, max=41034, avg=271.14, stdev=1195.39 00:31:57.831 lat (usec): min=199, max=41047, avg=278.21, stdev=1195.71 00:31:57.831 clat percentiles (usec): 00:31:57.831 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:31:57.831 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 245], 60.00th=[ 249], 00:31:57.831 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 265], 95.00th=[ 269], 00:31:57.831 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 570], 99.95th=[40633], 00:31:57.831 | 99.99th=[41157] 00:31:57.831 write: IOPS=2485, BW=9942KiB/s (10.2MB/s)(10.0MiB/1030msec); 0 zone resets 00:31:57.831 slat (nsec): min=8889, max=46872, avg=10262.18, stdev=1272.07 00:31:57.831 clat (usec): min=121, max=315, avg=135.07, stdev= 5.91 00:31:57.831 lat (usec): min=132, max=361, avg=145.33, stdev= 6.44 00:31:57.831 clat percentiles (usec): 00:31:57.831 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:31:57.831 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:57.831 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 143], 00:31:57.831 | 99.00th=[ 149], 99.50th=[ 151], 99.90th=[ 196], 99.95th=[ 206], 00:31:57.831 | 99.99th=[ 314] 00:31:57.831 bw ( KiB/s): min= 8192, max=12288, per=100.00%, avg=10240.00, stdev=2896.31, samples=2 00:31:57.831 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:31:57.831 lat (usec) : 250=83.39%, 500=16.55%, 750=0.02% 00:31:57.831 lat (msec) : 50=0.04% 00:31:57.831 cpu : usr=3.11%, sys=3.59%, ctx=4882, majf=0, minf=1 00:31:57.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.831 issued rwts: 
total=2322,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.831 00:31:57.831 Run status group 0 (all jobs): 00:31:57.831 READ: bw=9017KiB/s (9234kB/s), 9017KiB/s-9017KiB/s (9234kB/s-9234kB/s), io=9288KiB (9511kB), run=1030-1030msec 00:31:57.831 WRITE: bw=9942KiB/s (10.2MB/s), 9942KiB/s-9942KiB/s (10.2MB/s-10.2MB/s), io=10.0MiB (10.5MB), run=1030-1030msec 00:31:57.831 00:31:57.831 Disk stats (read/write): 00:31:57.831 nvme0n1: ios=2181/2560, merge=0/0, ticks=709/327, in_queue=1036, util=95.59% 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:57.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:57.831 16:04:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.831 rmmod nvme_tcp 00:31:57.831 rmmod nvme_fabrics 00:31:57.831 rmmod nvme_keyring 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2220657 ']' 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2220657 00:31:57.831 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2220657 ']' 00:31:57.832 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2220657 00:31:57.832 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:57.832 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.832 16:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220657 
00:31:57.832 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.832 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.832 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220657' 00:31:57.832 killing process with pid 2220657 00:31:57.832 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2220657 00:31:57.832 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2220657 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.091 16:04:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.091 16:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.639 00:32:00.639 real 0m13.177s 00:32:00.639 user 0m24.435s 00:32:00.639 sys 0m6.097s 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:00.639 ************************************ 00:32:00.639 END TEST nvmf_nmic 00:32:00.639 ************************************ 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.639 ************************************ 00:32:00.639 START TEST nvmf_fio_target 00:32:00.639 ************************************ 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:00.639 * Looking for test storage... 
00:32:00.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.639 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.640 
16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:00.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.640 --rc genhtml_branch_coverage=1 00:32:00.640 --rc genhtml_function_coverage=1 00:32:00.640 --rc genhtml_legend=1 00:32:00.640 --rc geninfo_all_blocks=1 00:32:00.640 --rc geninfo_unexecuted_blocks=1 00:32:00.640 00:32:00.640 ' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:00.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.640 --rc genhtml_branch_coverage=1 00:32:00.640 --rc genhtml_function_coverage=1 00:32:00.640 --rc genhtml_legend=1 00:32:00.640 --rc geninfo_all_blocks=1 00:32:00.640 --rc geninfo_unexecuted_blocks=1 00:32:00.640 00:32:00.640 ' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:00.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.640 --rc genhtml_branch_coverage=1 00:32:00.640 --rc genhtml_function_coverage=1 00:32:00.640 --rc genhtml_legend=1 00:32:00.640 --rc geninfo_all_blocks=1 00:32:00.640 --rc geninfo_unexecuted_blocks=1 00:32:00.640 00:32:00.640 ' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:00.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.640 --rc genhtml_branch_coverage=1 00:32:00.640 --rc genhtml_function_coverage=1 00:32:00.640 --rc genhtml_legend=1 00:32:00.640 --rc geninfo_all_blocks=1 
00:32:00.640 --rc geninfo_unexecuted_blocks=1 00:32:00.640 00:32:00.640 ' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:00.640 
16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.640 16:04:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.640 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.641 
16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.641 16:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.641 16:04:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.211 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.212 16:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:07.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:07.212 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.212 
16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:07.212 Found net 
devices under 0000:af:00.0: cvl_0_0 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:07.212 Found net devices under 0000:af:00.1: cvl_0_1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.212 16:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.212 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:32:07.213 00:32:07.213 --- 10.0.0.2 ping statistics --- 00:32:07.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.213 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:32:07.213 00:32:07.213 --- 10.0.0.1 ping statistics --- 00:32:07.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.213 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.213 16:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2225021 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2225021 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2225021 ']' 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.213 [2024-12-09 16:05:01.504694] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.213 [2024-12-09 16:05:01.505576] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:32:07.213 [2024-12-09 16:05:01.505610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.213 [2024-12-09 16:05:01.584863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.213 [2024-12-09 16:05:01.624182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.213 [2024-12-09 16:05:01.624229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.213 [2024-12-09 16:05:01.624239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.213 [2024-12-09 16:05:01.624248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.213 [2024-12-09 16:05:01.624256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.213 [2024-12-09 16:05:01.625819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.213 [2024-12-09 16:05:01.625926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.213 [2024-12-09 16:05:01.626034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.213 [2024-12-09 16:05:01.626035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.213 [2024-12-09 16:05:01.695437] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.213 [2024-12-09 16:05:01.695987] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:07.213 [2024-12-09 16:05:01.696397] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:07.213 [2024-12-09 16:05:01.696530] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:07.213 [2024-12-09 16:05:01.696607] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:07.213 [2024-12-09 16:05:01.934735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.213 16:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.213 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:07.213 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:07.213 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:07.213 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.473 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:07.473 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.731 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:07.732 16:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:07.990 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.248 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:08.248 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.248 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:08.248 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.507 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:08.507 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:08.765 16:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:09.024 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:09.024 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.024 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:09.024 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:09.282 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.541 [2024-12-09 16:05:04.594687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.541 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:09.800 16:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:09.800 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:10.369 16:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:12.272 16:05:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:12.272 [global] 00:32:12.272 thread=1 00:32:12.272 invalidate=1 00:32:12.272 rw=write 00:32:12.272 time_based=1 00:32:12.272 runtime=1 00:32:12.272 ioengine=libaio 00:32:12.272 direct=1 00:32:12.272 bs=4096 00:32:12.272 iodepth=1 00:32:12.272 norandommap=0 00:32:12.272 numjobs=1 00:32:12.272 00:32:12.272 verify_dump=1 00:32:12.272 verify_backlog=512 00:32:12.273 verify_state_save=0 00:32:12.273 do_verify=1 00:32:12.273 verify=crc32c-intel 00:32:12.273 [job0] 00:32:12.273 filename=/dev/nvme0n1 00:32:12.273 [job1] 00:32:12.273 filename=/dev/nvme0n2 00:32:12.273 [job2] 00:32:12.273 filename=/dev/nvme0n3 00:32:12.273 [job3] 00:32:12.273 filename=/dev/nvme0n4 00:32:12.273 Could not set queue depth (nvme0n1) 00:32:12.273 Could not set queue depth (nvme0n2) 00:32:12.273 Could not set queue depth (nvme0n3) 00:32:12.273 Could not set queue depth (nvme0n4) 00:32:12.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.531 fio-3.35 00:32:12.531 Starting 4 threads 00:32:13.904 00:32:13.904 job0: (groupid=0, jobs=1): err= 0: pid=2226300: Mon Dec 9 16:05:08 2024 00:32:13.904 read: IOPS=1997, BW=7988KiB/s (8180kB/s)(8220KiB/1029msec) 00:32:13.904 slat (nsec): min=6296, max=26314, avg=7152.93, stdev=1113.85 00:32:13.904 clat (usec): min=173, max=40969, avg=287.03, stdev=1783.25 00:32:13.904 lat (usec): min=180, 
max=40992, avg=294.18, stdev=1783.47 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 202], 00:32:13.904 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 210], 00:32:13.904 | 70.00th=[ 212], 80.00th=[ 215], 90.00th=[ 219], 95.00th=[ 223], 00:32:13.904 | 99.00th=[ 237], 99.50th=[ 260], 99.90th=[40633], 99.95th=[40633], 00:32:13.904 | 99.99th=[41157] 00:32:13.904 write: IOPS=2487, BW=9951KiB/s (10.2MB/s)(10.0MiB/1029msec); 0 zone resets 00:32:13.904 slat (nsec): min=9023, max=42153, avg=10275.68, stdev=1425.38 00:32:13.904 clat (usec): min=127, max=328, avg=151.13, stdev=12.84 00:32:13.904 lat (usec): min=137, max=364, avg=161.40, stdev=13.34 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:32:13.904 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:32:13.904 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:32:13.904 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 233], 99.95th=[ 322], 00:32:13.904 | 99.99th=[ 330] 00:32:13.904 bw ( KiB/s): min= 8192, max=12288, per=43.62%, avg=10240.00, stdev=2896.31, samples=2 00:32:13.904 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:32:13.904 lat (usec) : 250=99.67%, 500=0.20%, 750=0.04% 00:32:13.904 lat (msec) : 50=0.09% 00:32:13.904 cpu : usr=2.14%, sys=4.28%, ctx=4615, majf=0, minf=2 00:32:13.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.904 issued rwts: total=2055,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.904 job1: (groupid=0, jobs=1): err= 0: pid=2226301: Mon Dec 9 16:05:08 2024 00:32:13.904 read: IOPS=21, BW=87.4KiB/s 
(89.5kB/s)(88.0KiB/1007msec) 00:32:13.904 slat (nsec): min=11316, max=26825, avg=21189.14, stdev=2561.23 00:32:13.904 clat (usec): min=31804, max=41149, avg=40555.50, stdev=1955.58 00:32:13.904 lat (usec): min=31825, max=41161, avg=40576.69, stdev=1955.45 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[31851], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:13.904 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:13.904 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:13.904 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:13.904 | 99.99th=[41157] 00:32:13.904 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:32:13.904 slat (nsec): min=11253, max=36524, avg=13198.49, stdev=2404.66 00:32:13.904 clat (usec): min=146, max=311, avg=206.51, stdev=32.54 00:32:13.904 lat (usec): min=159, max=348, avg=219.71, stdev=33.12 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:32:13.904 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 235], 00:32:13.904 | 70.00th=[ 237], 80.00th=[ 239], 90.00th=[ 241], 95.00th=[ 245], 00:32:13.904 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 314], 00:32:13.904 | 99.99th=[ 314] 00:32:13.904 bw ( KiB/s): min= 4096, max= 4096, per=17.45%, avg=4096.00, stdev= 0.00, samples=1 00:32:13.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:13.904 lat (usec) : 250=94.57%, 500=1.31% 00:32:13.904 lat (msec) : 50=4.12% 00:32:13.904 cpu : usr=0.50%, sys=0.99%, ctx=536, majf=0, minf=1 00:32:13.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.904 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:32:13.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.904 job2: (groupid=0, jobs=1): err= 0: pid=2226302: Mon Dec 9 16:05:08 2024 00:32:13.904 read: IOPS=552, BW=2210KiB/s (2263kB/s)(2252KiB/1019msec) 00:32:13.904 slat (nsec): min=6799, max=25659, avg=8233.32, stdev=2826.18 00:32:13.904 clat (usec): min=217, max=41633, avg=1469.25, stdev=7013.00 00:32:13.904 lat (usec): min=224, max=41640, avg=1477.48, stdev=7013.56 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 219], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 225], 00:32:13.904 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:32:13.904 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 253], 00:32:13.904 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:13.904 | 99.99th=[41681] 00:32:13.904 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:32:13.904 slat (nsec): min=9405, max=41802, avg=10871.31, stdev=1570.88 00:32:13.904 clat (usec): min=139, max=368, avg=168.34, stdev=17.48 00:32:13.904 lat (usec): min=149, max=402, avg=179.21, stdev=18.13 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:32:13.904 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:32:13.904 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200], 00:32:13.904 | 99.00th=[ 210], 99.50th=[ 227], 99.90th=[ 351], 99.95th=[ 371], 00:32:13.904 | 99.99th=[ 371] 00:32:13.904 bw ( KiB/s): min= 8192, max= 8192, per=34.90%, avg=8192.00, stdev= 0.00, samples=1 00:32:13.904 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:13.904 lat (usec) : 250=97.48%, 500=1.45% 00:32:13.904 lat (msec) : 50=1.07% 00:32:13.904 cpu : usr=1.18%, sys=1.08%, ctx=1587, majf=0, minf=2 00:32:13.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:13.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.904 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.904 job3: (groupid=0, jobs=1): err= 0: pid=2226303: Mon Dec 9 16:05:08 2024 00:32:13.904 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:32:13.904 slat (nsec): min=6861, max=26789, avg=7720.70, stdev=1260.50 00:32:13.904 clat (usec): min=211, max=41001, avg=407.00, stdev=2529.36 00:32:13.904 lat (usec): min=218, max=41025, avg=414.72, stdev=2530.21 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 241], 00:32:13.904 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:32:13.904 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:32:13.904 | 99.00th=[ 281], 99.50th=[ 322], 99.90th=[41157], 99.95th=[41157], 00:32:13.904 | 99.99th=[41157] 00:32:13.904 write: IOPS=1941, BW=7764KiB/s (7951kB/s)(7772KiB/1001msec); 0 zone resets 00:32:13.904 slat (nsec): min=9735, max=38004, avg=11114.61, stdev=1200.74 00:32:13.904 clat (usec): min=128, max=362, avg=171.50, stdev=21.10 00:32:13.904 lat (usec): min=139, max=400, avg=182.62, stdev=21.28 00:32:13.904 clat percentiles (usec): 00:32:13.904 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:32:13.904 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:32:13.904 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 204], 95.00th=[ 210], 00:32:13.904 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 318], 99.95th=[ 363], 00:32:13.904 | 99.99th=[ 363] 00:32:13.904 bw ( KiB/s): min= 5072, max= 5072, per=21.61%, avg=5072.00, stdev= 0.00, samples=1 00:32:13.904 iops : min= 1268, max= 1268, avg=1268.00, stdev= 0.00, samples=1 00:32:13.904 lat (usec) : 250=82.41%, 500=17.42% 00:32:13.905 lat (msec) : 50=0.17% 00:32:13.905 cpu : usr=2.20%, sys=3.10%, ctx=3480, majf=0, 
minf=1 00:32:13.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.905 issued rwts: total=1536,1943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:13.905 00:32:13.905 Run status group 0 (all jobs): 00:32:13.905 READ: bw=15.9MiB/s (16.6MB/s), 87.4KiB/s-7988KiB/s (89.5kB/s-8180kB/s), io=16.3MiB (17.1MB), run=1001-1029msec 00:32:13.905 WRITE: bw=22.9MiB/s (24.0MB/s), 2034KiB/s-9951KiB/s (2083kB/s-10.2MB/s), io=23.6MiB (24.7MB), run=1001-1029msec 00:32:13.905 00:32:13.905 Disk stats (read/write): 00:32:13.905 nvme0n1: ios=2098/2434, merge=0/0, ticks=438/351, in_queue=789, util=86.37% 00:32:13.905 nvme0n2: ios=67/512, merge=0/0, ticks=1660/101, in_queue=1761, util=97.76% 00:32:13.905 nvme0n3: ios=550/1024, merge=0/0, ticks=623/167, in_queue=790, util=88.78% 00:32:13.905 nvme0n4: ios=1243/1536, merge=0/0, ticks=764/269, in_queue=1033, util=97.67% 00:32:13.905 16:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:13.905 [global] 00:32:13.905 thread=1 00:32:13.905 invalidate=1 00:32:13.905 rw=randwrite 00:32:13.905 time_based=1 00:32:13.905 runtime=1 00:32:13.905 ioengine=libaio 00:32:13.905 direct=1 00:32:13.905 bs=4096 00:32:13.905 iodepth=1 00:32:13.905 norandommap=0 00:32:13.905 numjobs=1 00:32:13.905 00:32:13.905 verify_dump=1 00:32:13.905 verify_backlog=512 00:32:13.905 verify_state_save=0 00:32:13.905 do_verify=1 00:32:13.905 verify=crc32c-intel 00:32:13.905 [job0] 00:32:13.905 filename=/dev/nvme0n1 00:32:13.905 [job1] 00:32:13.905 filename=/dev/nvme0n2 00:32:13.905 [job2] 00:32:13.905 filename=/dev/nvme0n3 00:32:13.905 [job3] 00:32:13.905 
filename=/dev/nvme0n4 00:32:13.905 Could not set queue depth (nvme0n1) 00:32:13.905 Could not set queue depth (nvme0n2) 00:32:13.905 Could not set queue depth (nvme0n3) 00:32:13.905 Could not set queue depth (nvme0n4) 00:32:14.162 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.162 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.162 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.162 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.162 fio-3.35 00:32:14.162 Starting 4 threads 00:32:15.536 00:32:15.536 job0: (groupid=0, jobs=1): err= 0: pid=2226671: Mon Dec 9 16:05:10 2024 00:32:15.536 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:32:15.536 slat (nsec): min=10149, max=34468, avg=24025.78, stdev=4888.00 00:32:15.536 clat (usec): min=458, max=41043, avg=39191.92, stdev=8444.62 00:32:15.536 lat (usec): min=492, max=41068, avg=39215.94, stdev=8442.37 00:32:15.536 clat percentiles (usec): 00:32:15.536 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:15.536 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:15.536 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:15.537 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:15.537 | 99.99th=[41157] 00:32:15.537 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:32:15.537 slat (nsec): min=10372, max=50256, avg=12106.99, stdev=2332.35 00:32:15.537 clat (usec): min=215, max=281, avg=241.30, stdev= 6.32 00:32:15.537 lat (usec): min=230, max=300, avg=253.41, stdev= 6.44 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 239], 00:32:15.537 | 30.00th=[ 239], 40.00th=[ 
239], 50.00th=[ 241], 60.00th=[ 241], 00:32:15.537 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 253], 00:32:15.537 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 281], 99.95th=[ 281], 00:32:15.537 | 99.99th=[ 281] 00:32:15.537 bw ( KiB/s): min= 4087, max= 4087, per=34.42%, avg=4087.00, stdev= 0.00, samples=1 00:32:15.537 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:32:15.537 lat (usec) : 250=88.79%, 500=7.10% 00:32:15.537 lat (msec) : 50=4.11% 00:32:15.537 cpu : usr=0.29%, sys=1.06%, ctx=536, majf=0, minf=1 00:32:15.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.537 job1: (groupid=0, jobs=1): err= 0: pid=2226672: Mon Dec 9 16:05:10 2024 00:32:15.537 read: IOPS=30, BW=124KiB/s (127kB/s)(128KiB/1035msec) 00:32:15.537 slat (nsec): min=7045, max=24149, avg=18283.72, stdev=6683.97 00:32:15.537 clat (usec): min=229, max=41726, avg=29500.23, stdev=18592.26 00:32:15.537 lat (usec): min=237, max=41736, avg=29518.51, stdev=18591.49 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 255], 00:32:15.537 | 30.00th=[40633], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:32:15.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:15.537 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:15.537 | 99.99th=[41681] 00:32:15.537 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:32:15.537 slat (nsec): min=9022, max=40493, avg=10338.94, stdev=1737.11 00:32:15.537 clat (usec): min=137, max=349, avg=163.21, stdev=23.77 00:32:15.537 lat (usec): min=147, max=390, 
avg=173.55, stdev=24.27 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:32:15.537 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:32:15.537 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 241], 00:32:15.537 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 351], 99.95th=[ 351], 00:32:15.537 | 99.99th=[ 351] 00:32:15.537 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:32:15.537 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:15.537 lat (usec) : 250=94.49%, 500=1.29% 00:32:15.537 lat (msec) : 50=4.23% 00:32:15.537 cpu : usr=0.19%, sys=0.58%, ctx=544, majf=0, minf=2 00:32:15.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.537 job2: (groupid=0, jobs=1): err= 0: pid=2226673: Mon Dec 9 16:05:10 2024 00:32:15.537 read: IOPS=1367, BW=5470KiB/s (5601kB/s)(5508KiB/1007msec) 00:32:15.537 slat (nsec): min=7365, max=37244, avg=8607.73, stdev=2101.94 00:32:15.537 clat (usec): min=186, max=41109, avg=531.48, stdev=3624.00 00:32:15.537 lat (usec): min=197, max=41132, avg=540.09, stdev=3625.03 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:32:15.537 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:32:15.537 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 237], 00:32:15.537 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:15.537 | 99.99th=[41157] 00:32:15.537 write: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec); 0 zone resets 00:32:15.537 slat 
(nsec): min=10549, max=46063, avg=11807.83, stdev=1797.89 00:32:15.537 clat (usec): min=117, max=294, avg=153.40, stdev=17.93 00:32:15.537 lat (usec): min=145, max=306, avg=165.21, stdev=18.28 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:32:15.537 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:32:15.537 | 70.00th=[ 161], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:32:15.537 | 99.00th=[ 198], 99.50th=[ 219], 99.90th=[ 269], 99.95th=[ 293], 00:32:15.537 | 99.99th=[ 293] 00:32:15.537 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:32:15.537 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:32:15.537 lat (usec) : 250=98.70%, 500=0.79%, 750=0.14% 00:32:15.537 lat (msec) : 50=0.38% 00:32:15.537 cpu : usr=2.19%, sys=4.87%, ctx=2914, majf=0, minf=1 00:32:15.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 issued rwts: total=1377,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.537 job3: (groupid=0, jobs=1): err= 0: pid=2226674: Mon Dec 9 16:05:10 2024 00:32:15.537 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:32:15.537 slat (nsec): min=9498, max=28837, avg=23521.45, stdev=3344.91 00:32:15.537 clat (usec): min=40854, max=41882, avg=41033.20, stdev=213.10 00:32:15.537 lat (usec): min=40878, max=41906, avg=41056.72, stdev=212.08 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:15.537 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:15.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:15.537 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:15.537 | 99.99th=[41681] 00:32:15.537 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:15.537 slat (nsec): min=9970, max=47744, avg=11139.69, stdev=2136.15 00:32:15.537 clat (usec): min=140, max=358, avg=203.68, stdev=31.17 00:32:15.537 lat (usec): min=151, max=406, avg=214.82, stdev=31.53 00:32:15.537 clat percentiles (usec): 00:32:15.537 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 174], 00:32:15.537 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 217], 00:32:15.537 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:32:15.537 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 359], 99.95th=[ 359], 00:32:15.537 | 99.99th=[ 359] 00:32:15.537 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:32:15.537 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:15.537 lat (usec) : 250=90.26%, 500=5.62% 00:32:15.537 lat (msec) : 50=4.12% 00:32:15.537 cpu : usr=0.30%, sys=0.59%, ctx=535, majf=0, minf=1 00:32:15.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.537 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.537 00:32:15.537 Run status group 0 (all jobs): 00:32:15.537 READ: bw=5619KiB/s (5754kB/s), 86.7KiB/s-5470KiB/s (88.8kB/s-5601kB/s), io=5816KiB (5956kB), run=1007-1035msec 00:32:15.537 WRITE: bw=11.6MiB/s (12.2MB/s), 1979KiB/s-6101KiB/s (2026kB/s-6248kB/s), io=12.0MiB (12.6MB), run=1007-1035msec 00:32:15.537 00:32:15.537 Disk stats (read/write): 00:32:15.537 nvme0n1: ios=43/512, merge=0/0, ticks=1158/114, in_queue=1272, util=99.40% 00:32:15.537 nvme0n2: ios=21/512, merge=0/0, 
ticks=739/83, in_queue=822, util=86.27% 00:32:15.537 nvme0n3: ios=1395/1536, merge=0/0, ticks=1458/219, in_queue=1677, util=96.73% 00:32:15.537 nvme0n4: ios=57/512, merge=0/0, ticks=1052/100, in_queue=1152, util=98.62% 00:32:15.537 16:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:15.537 [global] 00:32:15.537 thread=1 00:32:15.537 invalidate=1 00:32:15.537 rw=write 00:32:15.537 time_based=1 00:32:15.537 runtime=1 00:32:15.537 ioengine=libaio 00:32:15.537 direct=1 00:32:15.537 bs=4096 00:32:15.537 iodepth=128 00:32:15.537 norandommap=0 00:32:15.537 numjobs=1 00:32:15.537 00:32:15.537 verify_dump=1 00:32:15.538 verify_backlog=512 00:32:15.538 verify_state_save=0 00:32:15.538 do_verify=1 00:32:15.538 verify=crc32c-intel 00:32:15.538 [job0] 00:32:15.538 filename=/dev/nvme0n1 00:32:15.538 [job1] 00:32:15.538 filename=/dev/nvme0n2 00:32:15.538 [job2] 00:32:15.538 filename=/dev/nvme0n3 00:32:15.538 [job3] 00:32:15.538 filename=/dev/nvme0n4 00:32:15.538 Could not set queue depth (nvme0n1) 00:32:15.538 Could not set queue depth (nvme0n2) 00:32:15.538 Could not set queue depth (nvme0n3) 00:32:15.538 Could not set queue depth (nvme0n4) 00:32:15.796 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.796 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.796 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.796 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.796 fio-3.35 00:32:15.796 Starting 4 threads 00:32:17.194 00:32:17.194 job0: (groupid=0, jobs=1): err= 0: pid=2227041: Mon Dec 9 16:05:12 2024 00:32:17.194 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 
00:32:17.194 slat (nsec): min=1062, max=11702k, avg=84004.19, stdev=577424.93 00:32:17.194 clat (usec): min=1353, max=45949, avg=11297.02, stdev=6199.47 00:32:17.194 lat (usec): min=1365, max=45959, avg=11381.02, stdev=6247.18 00:32:17.194 clat percentiles (usec): 00:32:17.194 | 1.00th=[ 1844], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 7832], 00:32:17.194 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10290], 00:32:17.194 | 70.00th=[11731], 80.00th=[12911], 90.00th=[17171], 95.00th=[26346], 00:32:17.194 | 99.00th=[36439], 99.50th=[41157], 99.90th=[44827], 99.95th=[45876], 00:32:17.194 | 99.99th=[45876] 00:32:17.194 write: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1007msec); 0 zone resets 00:32:17.194 slat (nsec): min=1762, max=22167k, avg=102856.77, stdev=713924.16 00:32:17.194 clat (usec): min=368, max=51940, avg=14927.37, stdev=10895.75 00:32:17.194 lat (usec): min=375, max=51943, avg=15030.22, stdev=10973.67 00:32:17.194 clat percentiles (usec): 00:32:17.194 | 1.00th=[ 1647], 5.00th=[ 5342], 10.00th=[ 6980], 20.00th=[ 7767], 00:32:17.194 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11994], 00:32:17.194 | 70.00th=[15270], 80.00th=[20841], 90.00th=[34341], 95.00th=[41157], 00:32:17.194 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:32:17.194 | 99.99th=[52167] 00:32:17.194 bw ( KiB/s): min=16656, max=22760, per=29.34%, avg=19708.00, stdev=4316.18, samples=2 00:32:17.194 iops : min= 4164, max= 5690, avg=4927.00, stdev=1079.04, samples=2 00:32:17.194 lat (usec) : 500=0.04%, 750=0.01% 00:32:17.194 lat (msec) : 2=1.17%, 4=1.33%, 10=48.11%, 20=34.42%, 50=14.67% 00:32:17.194 lat (msec) : 100=0.24% 00:32:17.194 cpu : usr=3.28%, sys=5.27%, ctx=428, majf=0, minf=1 00:32:17.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:17.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:32:17.194 issued rwts: total=4608,5055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.194 job1: (groupid=0, jobs=1): err= 0: pid=2227042: Mon Dec 9 16:05:12 2024 00:32:17.194 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:32:17.194 slat (nsec): min=1564, max=9873.6k, avg=109942.83, stdev=644372.83 00:32:17.194 clat (usec): min=4816, max=38936, avg=14703.35, stdev=7733.55 00:32:17.194 lat (usec): min=4827, max=42400, avg=14813.29, stdev=7769.53 00:32:17.194 clat percentiles (usec): 00:32:17.194 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:32:17.194 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11600], 00:32:17.194 | 70.00th=[17695], 80.00th=[22938], 90.00th=[25560], 95.00th=[31065], 00:32:17.194 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:32:17.194 | 99.99th=[39060] 00:32:17.194 write: IOPS=3946, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1006msec); 0 zone resets 00:32:17.194 slat (nsec): min=1919, max=18931k, avg=144365.03, stdev=912245.10 00:32:17.194 clat (msec): min=2, max=108, avg=18.72, stdev=16.84 00:32:17.194 lat (msec): min=2, max=108, avg=18.86, stdev=16.94 00:32:17.194 clat percentiles (msec): 00:32:17.194 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:32:17.194 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 16], 00:32:17.194 | 70.00th=[ 16], 80.00th=[ 27], 90.00th=[ 42], 95.00th=[ 53], 00:32:17.194 | 99.00th=[ 92], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 109], 00:32:17.194 | 99.99th=[ 109] 00:32:17.194 bw ( KiB/s): min=14352, max=16384, per=22.88%, avg=15368.00, stdev=1436.84, samples=2 00:32:17.194 iops : min= 3588, max= 4096, avg=3842.00, stdev=359.21, samples=2 00:32:17.194 lat (msec) : 4=0.19%, 10=33.55%, 20=41.92%, 50=21.59%, 100=2.46% 00:32:17.194 lat (msec) : 250=0.29% 00:32:17.194 cpu : usr=3.88%, sys=3.38%, ctx=399, majf=0, minf=1 00:32:17.194 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:17.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.194 issued rwts: total=3584,3970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.195 job2: (groupid=0, jobs=1): err= 0: pid=2227044: Mon Dec 9 16:05:12 2024 00:32:17.195 read: IOPS=3019, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1003msec) 00:32:17.195 slat (nsec): min=1212, max=20452k, avg=173805.80, stdev=1216059.31 00:32:17.195 clat (usec): min=2681, max=66926, avg=21581.50, stdev=12072.29 00:32:17.195 lat (usec): min=2688, max=66951, avg=21755.31, stdev=12172.70 00:32:17.195 clat percentiles (usec): 00:32:17.195 | 1.00th=[ 3785], 5.00th=[ 7635], 10.00th=[10290], 20.00th=[12256], 00:32:17.195 | 30.00th=[12649], 40.00th=[15270], 50.00th=[17957], 60.00th=[20841], 00:32:17.195 | 70.00th=[26346], 80.00th=[30802], 90.00th=[38536], 95.00th=[47973], 00:32:17.195 | 99.00th=[54789], 99.50th=[54789], 99.90th=[63177], 99.95th=[66847], 00:32:17.195 | 99.99th=[66847] 00:32:17.195 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:32:17.195 slat (usec): min=2, max=25357, avg=148.76, stdev=1065.74 00:32:17.195 clat (usec): min=3891, max=61379, avg=20149.51, stdev=11116.18 00:32:17.195 lat (usec): min=3897, max=61408, avg=20298.28, stdev=11220.03 00:32:17.195 clat percentiles (usec): 00:32:17.195 | 1.00th=[ 4228], 5.00th=[ 8455], 10.00th=[11076], 20.00th=[11994], 00:32:17.195 | 30.00th=[12256], 40.00th=[13960], 50.00th=[15008], 60.00th=[18744], 00:32:17.195 | 70.00th=[24249], 80.00th=[31327], 90.00th=[38011], 95.00th=[41681], 00:32:17.195 | 99.00th=[52691], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:32:17.195 | 99.99th=[61604] 00:32:17.195 bw ( KiB/s): min=12288, max=12288, per=18.30%, avg=12288.00, stdev= 0.00, samples=2 00:32:17.195 iops : min= 3072, 
max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:32:17.195 lat (msec) : 4=0.70%, 10=8.46%, 20=49.60%, 50=38.35%, 100=2.88% 00:32:17.195 cpu : usr=1.20%, sys=4.19%, ctx=231, majf=0, minf=1 00:32:17.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:17.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.195 issued rwts: total=3029,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.195 job3: (groupid=0, jobs=1): err= 0: pid=2227045: Mon Dec 9 16:05:12 2024 00:32:17.195 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:32:17.195 slat (nsec): min=1985, max=11650k, avg=94143.27, stdev=757081.06 00:32:17.195 clat (usec): min=4236, max=39782, avg=12481.02, stdev=4327.11 00:32:17.195 lat (usec): min=4246, max=39788, avg=12575.17, stdev=4390.21 00:32:17.195 clat percentiles (usec): 00:32:17.195 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8848], 00:32:17.195 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11600], 60.00th=[12649], 00:32:17.195 | 70.00th=[13698], 80.00th=[14746], 90.00th=[17171], 95.00th=[20579], 00:32:17.195 | 99.00th=[27919], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:32:17.195 | 99.99th=[39584] 00:32:17.195 write: IOPS=4813, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1010msec); 0 zone resets 00:32:17.195 slat (usec): min=2, max=23760, avg=108.11, stdev=770.94 00:32:17.195 clat (usec): min=3359, max=39777, avg=14438.58, stdev=7400.13 00:32:17.195 lat (usec): min=3368, max=39783, avg=14546.69, stdev=7458.37 00:32:17.195 clat percentiles (usec): 00:32:17.195 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8356], 00:32:17.195 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11469], 60.00th=[12518], 00:32:17.195 | 70.00th=[15926], 80.00th=[20579], 90.00th=[26346], 95.00th=[30278], 00:32:17.195 | 
99.00th=[33162], 99.50th=[33817], 99.90th=[36963], 99.95th=[39584], 00:32:17.195 | 99.99th=[39584] 00:32:17.195 bw ( KiB/s): min=18552, max=19320, per=28.19%, avg=18936.00, stdev=543.06, samples=2 00:32:17.195 iops : min= 4638, max= 4830, avg=4734.00, stdev=135.76, samples=2 00:32:17.195 lat (msec) : 4=0.13%, 10=31.22%, 20=54.69%, 50=13.96% 00:32:17.195 cpu : usr=4.36%, sys=7.14%, ctx=286, majf=0, minf=1 00:32:17.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:17.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.195 issued rwts: total=4608,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.195 00:32:17.195 Run status group 0 (all jobs): 00:32:17.195 READ: bw=61.2MiB/s (64.2MB/s), 11.8MiB/s-17.9MiB/s (12.4MB/s-18.7MB/s), io=61.8MiB (64.8MB), run=1003-1010msec 00:32:17.195 WRITE: bw=65.6MiB/s (68.8MB/s), 12.0MiB/s-19.6MiB/s (12.5MB/s-20.6MB/s), io=66.2MiB (69.5MB), run=1003-1010msec 00:32:17.195 00:32:17.195 Disk stats (read/write): 00:32:17.195 nvme0n1: ios=4146/4143, merge=0/0, ticks=31150/44090, in_queue=75240, util=84.97% 00:32:17.195 nvme0n2: ios=3090/3159, merge=0/0, ticks=19651/23391, in_queue=43042, util=97.56% 00:32:17.195 nvme0n3: ios=2560/2814, merge=0/0, ticks=21635/22642, in_queue=44277, util=87.12% 00:32:17.195 nvme0n4: ios=3623/4096, merge=0/0, ticks=42908/58354, in_queue=101262, util=100.00% 00:32:17.195 16:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:17.195 [global] 00:32:17.195 thread=1 00:32:17.195 invalidate=1 00:32:17.195 rw=randwrite 00:32:17.195 time_based=1 00:32:17.195 runtime=1 00:32:17.195 ioengine=libaio 00:32:17.195 direct=1 00:32:17.195 bs=4096 
00:32:17.195 iodepth=128 00:32:17.195 norandommap=0 00:32:17.195 numjobs=1 00:32:17.195 00:32:17.195 verify_dump=1 00:32:17.195 verify_backlog=512 00:32:17.195 verify_state_save=0 00:32:17.195 do_verify=1 00:32:17.195 verify=crc32c-intel 00:32:17.195 [job0] 00:32:17.195 filename=/dev/nvme0n1 00:32:17.195 [job1] 00:32:17.195 filename=/dev/nvme0n2 00:32:17.195 [job2] 00:32:17.195 filename=/dev/nvme0n3 00:32:17.195 [job3] 00:32:17.195 filename=/dev/nvme0n4 00:32:17.195 Could not set queue depth (nvme0n1) 00:32:17.195 Could not set queue depth (nvme0n2) 00:32:17.195 Could not set queue depth (nvme0n3) 00:32:17.195 Could not set queue depth (nvme0n4) 00:32:17.454 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.454 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.454 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.454 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.454 fio-3.35 00:32:17.454 Starting 4 threads 00:32:18.833 00:32:18.833 job0: (groupid=0, jobs=1): err= 0: pid=2227410: Mon Dec 9 16:05:13 2024 00:32:18.833 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:32:18.833 slat (nsec): min=1289, max=11041k, avg=95167.24, stdev=739481.08 00:32:18.833 clat (usec): min=2967, max=28952, avg=12084.10, stdev=3584.31 00:32:18.833 lat (usec): min=2980, max=35380, avg=12179.27, stdev=3652.68 00:32:18.833 clat percentiles (usec): 00:32:18.833 | 1.00th=[ 6063], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9372], 00:32:18.833 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10945], 60.00th=[11469], 00:32:18.833 | 70.00th=[13042], 80.00th=[15401], 90.00th=[18220], 95.00th=[19268], 00:32:18.833 | 99.00th=[20841], 99.50th=[24249], 99.90th=[26084], 99.95th=[27919], 00:32:18.833 | 99.99th=[28967] 
00:32:18.833 write: IOPS=5296, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1007msec); 0 zone resets 00:32:18.833 slat (usec): min=2, max=22140, avg=91.76, stdev=718.06 00:32:18.833 clat (usec): min=1494, max=50693, avg=12349.70, stdev=7178.35 00:32:18.833 lat (usec): min=1531, max=50697, avg=12441.46, stdev=7225.07 00:32:18.833 clat percentiles (usec): 00:32:18.833 | 1.00th=[ 4228], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 8979], 00:32:18.833 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11338], 00:32:18.833 | 70.00th=[11600], 80.00th=[11863], 90.00th=[20579], 95.00th=[25297], 00:32:18.833 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:32:18.833 | 99.99th=[50594] 00:32:18.833 bw ( KiB/s): min=20296, max=21352, per=26.65%, avg=20824.00, stdev=746.70, samples=2 00:32:18.833 iops : min= 5074, max= 5338, avg=5206.00, stdev=186.68, samples=2 00:32:18.833 lat (msec) : 2=0.02%, 4=0.74%, 10=39.69%, 20=52.39%, 50=6.50% 00:32:18.833 lat (msec) : 100=0.66% 00:32:18.833 cpu : usr=4.57%, sys=4.17%, ctx=518, majf=0, minf=1 00:32:18.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:18.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.833 issued rwts: total=5120,5334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.833 job1: (groupid=0, jobs=1): err= 0: pid=2227411: Mon Dec 9 16:05:13 2024 00:32:18.833 read: IOPS=4325, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1003msec) 00:32:18.833 slat (nsec): min=1497, max=9566.1k, avg=83556.95, stdev=613535.23 00:32:18.833 clat (usec): min=1550, max=34166, avg=11003.60, stdev=3049.80 00:32:18.833 lat (usec): min=5274, max=42998, avg=11087.16, stdev=3103.12 00:32:18.833 clat percentiles (usec): 00:32:18.833 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8848], 00:32:18.833 | 30.00th=[ 9634], 
40.00th=[10290], 50.00th=[10683], 60.00th=[10814], 00:32:18.833 | 70.00th=[11469], 80.00th=[12256], 90.00th=[14222], 95.00th=[16909], 00:32:18.833 | 99.00th=[19792], 99.50th=[21103], 99.90th=[34341], 99.95th=[34341], 00:32:18.833 | 99.99th=[34341] 00:32:18.833 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:32:18.833 slat (usec): min=2, max=22488, avg=128.22, stdev=812.78 00:32:18.833 clat (usec): min=981, max=114490, avg=17225.34, stdev=20840.07 00:32:18.833 lat (usec): min=993, max=114503, avg=17353.56, stdev=20967.79 00:32:18.833 clat percentiles (msec): 00:32:18.833 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:32:18.833 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:32:18.833 | 70.00th=[ 12], 80.00th=[ 15], 90.00th=[ 33], 95.00th=[ 73], 00:32:18.833 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 115], 99.95th=[ 115], 00:32:18.833 | 99.99th=[ 115] 00:32:18.833 bw ( KiB/s): min=17464, max=19400, per=23.59%, avg=18432.00, stdev=1368.96, samples=2 00:32:18.833 iops : min= 4366, max= 4850, avg=4608.00, stdev=342.24, samples=2 00:32:18.833 lat (usec) : 1000=0.02% 00:32:18.833 lat (msec) : 2=0.02%, 4=0.25%, 10=38.70%, 20=52.26%, 50=5.03% 00:32:18.833 lat (msec) : 100=2.58%, 250=1.14% 00:32:18.833 cpu : usr=4.09%, sys=4.79%, ctx=497, majf=0, minf=1 00:32:18.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:18.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.833 issued rwts: total=4338,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.833 job2: (groupid=0, jobs=1): err= 0: pid=2227412: Mon Dec 9 16:05:13 2024 00:32:18.833 read: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1006msec) 00:32:18.833 slat (nsec): min=1681, max=14712k, avg=105476.28, stdev=832958.28 00:32:18.833 clat (usec): 
min=3855, max=33239, avg=13926.45, stdev=4369.54 00:32:18.833 lat (usec): min=3863, max=33266, avg=14031.93, stdev=4436.27 00:32:18.833 clat percentiles (usec): 00:32:18.834 | 1.00th=[ 4178], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:32:18.834 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:32:18.834 | 70.00th=[14353], 80.00th=[17433], 90.00th=[19006], 95.00th=[22414], 00:32:18.834 | 99.00th=[29230], 99.50th=[29754], 99.90th=[32113], 99.95th=[32375], 00:32:18.834 | 99.99th=[33162] 00:32:18.834 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:32:18.834 slat (usec): min=2, max=23094, avg=107.49, stdev=820.87 00:32:18.834 clat (usec): min=4033, max=47935, avg=14271.59, stdev=7146.47 00:32:18.834 lat (usec): min=4041, max=47953, avg=14379.08, stdev=7208.28 00:32:18.834 clat percentiles (usec): 00:32:18.834 | 1.00th=[ 6325], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[10421], 00:32:18.834 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12256], 60.00th=[12649], 00:32:18.834 | 70.00th=[13042], 80.00th=[16909], 90.00th=[20579], 95.00th=[31327], 00:32:18.834 | 99.00th=[43254], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:32:18.834 | 99.99th=[47973] 00:32:18.834 bw ( KiB/s): min=16384, max=20480, per=23.59%, avg=18432.00, stdev=2896.31, samples=2 00:32:18.834 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:18.834 lat (msec) : 4=0.33%, 10=9.74%, 20=78.08%, 50=11.85% 00:32:18.834 cpu : usr=4.18%, sys=5.67%, ctx=326, majf=0, minf=1 00:32:18.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:18.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.834 issued rwts: total=4440,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.834 job3: (groupid=0, jobs=1): err= 0: 
pid=2227413: Mon Dec 9 16:05:13 2024 00:32:18.834 read: IOPS=4942, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:32:18.834 slat (nsec): min=1501, max=32228k, avg=104317.71, stdev=667785.18 00:32:18.834 clat (usec): min=484, max=43371, avg=13109.68, stdev=4516.77 00:32:18.834 lat (usec): min=1664, max=44053, avg=13214.00, stdev=4529.34 00:32:18.834 clat percentiles (usec): 00:32:18.834 | 1.00th=[ 7767], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:32:18.834 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:32:18.834 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15008], 95.00th=[15795], 00:32:18.834 | 99.00th=[38536], 99.50th=[38536], 99.90th=[43254], 99.95th=[43254], 00:32:18.834 | 99.99th=[43254] 00:32:18.834 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:32:18.834 slat (usec): min=2, max=5679, avg=89.95, stdev=444.09 00:32:18.834 clat (usec): min=6044, max=16554, avg=12075.49, stdev=1449.46 00:32:18.834 lat (usec): min=6055, max=16582, avg=12165.43, stdev=1430.63 00:32:18.834 clat percentiles (usec): 00:32:18.834 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11076], 00:32:18.834 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:32:18.834 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:32:18.834 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:32:18.834 | 99.99th=[16581] 00:32:18.834 bw ( KiB/s): min=20168, max=20792, per=26.21%, avg=20480.00, stdev=441.23, samples=2 00:32:18.834 iops : min= 5042, max= 5198, avg=5120.00, stdev=110.31, samples=2 00:32:18.834 lat (usec) : 500=0.01% 00:32:18.834 lat (msec) : 2=0.02%, 4=0.24%, 10=6.20%, 20=92.28%, 50=1.26% 00:32:18.834 cpu : usr=2.80%, sys=5.00%, ctx=561, majf=0, minf=1 00:32:18.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:18.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.834 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.834 issued rwts: total=4952,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.834 00:32:18.834 Run status group 0 (all jobs): 00:32:18.834 READ: bw=73.1MiB/s (76.7MB/s), 16.9MiB/s-19.9MiB/s (17.7MB/s-20.8MB/s), io=73.6MiB (77.2MB), run=1002-1007msec 00:32:18.834 WRITE: bw=76.3MiB/s (80.0MB/s), 17.9MiB/s-20.7MiB/s (18.8MB/s-21.7MB/s), io=76.8MiB (80.6MB), run=1002-1007msec 00:32:18.834 00:32:18.834 Disk stats (read/write): 00:32:18.834 nvme0n1: ios=3876/4096, merge=0/0, ticks=45661/51462, in_queue=97123, util=80.86% 00:32:18.834 nvme0n2: ios=3101/3559, merge=0/0, ticks=22277/38597, in_queue=60874, util=99.69% 00:32:18.834 nvme0n3: ios=3196/3584, merge=0/0, ticks=44823/50040, in_queue=94863, util=99.78% 00:32:18.834 nvme0n4: ios=3965/4096, merge=0/0, ticks=15547/13894, in_queue=29441, util=92.45% 00:32:18.834 16:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:18.834 16:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2227641 00:32:18.834 16:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:18.834 16:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:18.834 [global] 00:32:18.834 thread=1 00:32:18.834 invalidate=1 00:32:18.834 rw=read 00:32:18.834 time_based=1 00:32:18.834 runtime=10 00:32:18.834 ioengine=libaio 00:32:18.834 direct=1 00:32:18.834 bs=4096 00:32:18.834 iodepth=1 00:32:18.834 norandommap=1 00:32:18.834 numjobs=1 00:32:18.834 00:32:18.834 [job0] 00:32:18.834 filename=/dev/nvme0n1 00:32:18.834 [job1] 00:32:18.834 filename=/dev/nvme0n2 00:32:18.834 [job2] 00:32:18.834 filename=/dev/nvme0n3 00:32:18.834 [job3] 00:32:18.834 
filename=/dev/nvme0n4 00:32:18.834 Could not set queue depth (nvme0n1) 00:32:18.834 Could not set queue depth (nvme0n2) 00:32:18.834 Could not set queue depth (nvme0n3) 00:32:18.834 Could not set queue depth (nvme0n4) 00:32:19.092 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.092 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.092 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.092 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.092 fio-3.35 00:32:19.092 Starting 4 threads 00:32:21.625 16:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:21.881 16:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:21.881 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=434176, buflen=4096 00:32:21.881 fio: pid=2227788, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.139 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=22597632, buflen=4096 00:32:22.139 fio: pid=2227787, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.139 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.139 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:22.398 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=53403648, buflen=4096 00:32:22.398 fio: pid=2227777, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.398 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.398 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:22.398 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48001024, buflen=4096 00:32:22.398 fio: pid=2227780, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.398 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.398 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:22.657 00:32:22.657 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2227777: Mon Dec 9 16:05:17 2024 00:32:22.657 read: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(50.9MiB/3144msec) 00:32:22.657 slat (usec): min=6, max=11652, avg= 9.97, stdev=132.71 00:32:22.657 clat (usec): min=177, max=578, avg=227.62, stdev=20.43 00:32:22.657 lat (usec): min=187, max=12076, avg=237.59, stdev=136.12 00:32:22.657 clat percentiles (usec): 00:32:22.657 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 215], 00:32:22.657 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 229], 00:32:22.657 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 260], 00:32:22.657 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 424], 00:32:22.657 | 99.99th=[ 510] 00:32:22.657 bw ( KiB/s): min=15635, max=17520, per=46.42%, avg=16729.83, 
stdev=703.64, samples=6 00:32:22.657 iops : min= 3908, max= 4380, avg=4182.33, stdev=176.14, samples=6 00:32:22.657 lat (usec) : 250=88.35%, 500=11.63%, 750=0.02% 00:32:22.657 cpu : usr=2.42%, sys=6.62%, ctx=13041, majf=0, minf=1 00:32:22.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 issued rwts: total=13039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.657 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2227780: Mon Dec 9 16:05:17 2024 00:32:22.657 read: IOPS=3475, BW=13.6MiB/s (14.2MB/s)(45.8MiB/3372msec) 00:32:22.657 slat (usec): min=6, max=15678, avg=10.01, stdev=183.89 00:32:22.657 clat (usec): min=176, max=42638, avg=274.55, stdev=1420.71 00:32:22.657 lat (usec): min=183, max=51987, avg=284.55, stdev=1459.23 00:32:22.657 clat percentiles (usec): 00:32:22.657 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:32:22.657 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:32:22.657 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 251], 00:32:22.657 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[40633], 99.95th=[41157], 00:32:22.657 | 99.99th=[42206] 00:32:22.657 bw ( KiB/s): min= 6331, max=17680, per=42.79%, avg=15421.83, stdev=4472.97, samples=6 00:32:22.657 iops : min= 1582, max= 4420, avg=3855.33, stdev=1118.55, samples=6 00:32:22.657 lat (usec) : 250=94.97%, 500=4.85%, 750=0.02% 00:32:22.657 lat (msec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.12% 00:32:22.657 cpu : usr=0.98%, sys=3.14%, ctx=11724, majf=0, minf=2 00:32:22.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:22.657 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 issued rwts: total=11720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.657 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2227787: Mon Dec 9 16:05:17 2024 00:32:22.657 read: IOPS=1879, BW=7516KiB/s (7697kB/s)(21.6MiB/2936msec) 00:32:22.657 slat (nsec): min=4554, max=32416, avg=7510.71, stdev=1401.84 00:32:22.657 clat (usec): min=214, max=41466, avg=519.42, stdev=3231.98 00:32:22.657 lat (usec): min=220, max=41473, avg=526.93, stdev=3232.90 00:32:22.657 clat percentiles (usec): 00:32:22.657 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:32:22.657 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:32:22.657 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:32:22.657 | 99.00th=[ 347], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:22.657 | 99.99th=[41681] 00:32:22.657 bw ( KiB/s): min= 144, max=14784, per=17.40%, avg=6270.40, stdev=7199.06, samples=5 00:32:22.657 iops : min= 36, max= 3696, avg=1567.60, stdev=1799.76, samples=5 00:32:22.657 lat (usec) : 250=40.70%, 500=58.59% 00:32:22.657 lat (msec) : 10=0.02%, 50=0.67% 00:32:22.657 cpu : usr=0.61%, sys=1.67%, ctx=5518, majf=0, minf=2 00:32:22.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 issued rwts: total=5518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.657 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2227788: Mon Dec 9 16:05:17 2024 00:32:22.657 read: IOPS=39, BW=155KiB/s 
(159kB/s)(424KiB/2739msec) 00:32:22.657 slat (nsec): min=8683, max=52792, avg=12631.49, stdev=4891.93 00:32:22.657 clat (usec): min=215, max=42013, avg=25619.65, stdev=19817.26 00:32:22.657 lat (usec): min=225, max=42024, avg=25632.28, stdev=19816.65 00:32:22.657 clat percentiles (usec): 00:32:22.657 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 253], 20.00th=[ 285], 00:32:22.657 | 30.00th=[ 326], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:32:22.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:22.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:22.657 | 99.99th=[42206] 00:32:22.657 bw ( KiB/s): min= 128, max= 176, per=0.42%, avg=152.00, stdev=19.60, samples=5 00:32:22.657 iops : min= 32, max= 44, avg=38.00, stdev= 4.90, samples=5 00:32:22.657 lat (usec) : 250=9.35%, 500=28.04% 00:32:22.657 lat (msec) : 50=61.68% 00:32:22.657 cpu : usr=0.11%, sys=0.00%, ctx=108, majf=0, minf=1 00:32:22.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.657 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.657 00:32:22.657 Run status group 0 (all jobs): 00:32:22.657 READ: bw=35.2MiB/s (36.9MB/s), 155KiB/s-16.2MiB/s (159kB/s-17.0MB/s), io=119MiB (124MB), run=2739-3372msec 00:32:22.657 00:32:22.657 Disk stats (read/write): 00:32:22.657 nvme0n1: ios=12907/0, merge=0/0, ticks=2823/0, in_queue=2823, util=94.82% 00:32:22.658 nvme0n2: ios=11718/0, merge=0/0, ticks=3133/0, in_queue=3133, util=95.32% 00:32:22.658 nvme0n3: ios=5254/0, merge=0/0, ticks=2772/0, in_queue=2772, util=96.44% 00:32:22.658 nvme0n4: ios=103/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.44% 00:32:22.658 16:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.658 16:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:22.916 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.916 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:23.174 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:23.174 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2227641 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:23.432 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:23.691 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:23.691 nvmf hotplug test: fio failed as expected 00:32:23.691 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.950 16:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.950 rmmod nvme_tcp 00:32:23.950 rmmod nvme_fabrics 00:32:23.950 rmmod nvme_keyring 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2225021 ']' 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2225021 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2225021 ']' 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2225021 00:32:23.950 16:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225021 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225021' 00:32:23.950 killing process with pid 2225021 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2225021 00:32:23.950 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2225021 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.209 
16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.209 16:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.747 00:32:26.747 real 0m25.997s 00:32:26.747 user 1m32.512s 00:32:26.747 sys 0m11.237s 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.747 ************************************ 00:32:26.747 END TEST nvmf_fio_target 00:32:26.747 ************************************ 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:26.747 ************************************ 00:32:26.747 START TEST nvmf_bdevio 00:32:26.747 
************************************ 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:26.747 * Looking for test storage... 00:32:26.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:26.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.747 --rc genhtml_branch_coverage=1 00:32:26.747 --rc genhtml_function_coverage=1 00:32:26.747 --rc genhtml_legend=1 00:32:26.747 --rc geninfo_all_blocks=1 00:32:26.747 --rc geninfo_unexecuted_blocks=1 00:32:26.747 00:32:26.747 ' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:26.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.747 --rc genhtml_branch_coverage=1 00:32:26.747 --rc genhtml_function_coverage=1 00:32:26.747 --rc genhtml_legend=1 00:32:26.747 --rc geninfo_all_blocks=1 00:32:26.747 --rc geninfo_unexecuted_blocks=1 00:32:26.747 00:32:26.747 ' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:26.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.747 --rc genhtml_branch_coverage=1 00:32:26.747 --rc genhtml_function_coverage=1 00:32:26.747 --rc genhtml_legend=1 00:32:26.747 --rc geninfo_all_blocks=1 00:32:26.747 --rc geninfo_unexecuted_blocks=1 00:32:26.747 00:32:26.747 ' 00:32:26.747 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:26.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:26.748 --rc genhtml_branch_coverage=1 00:32:26.748 --rc genhtml_function_coverage=1 00:32:26.748 --rc genhtml_legend=1 00:32:26.748 --rc geninfo_all_blocks=1 00:32:26.748 --rc geninfo_unexecuted_blocks=1 00:32:26.748 00:32:26.748 ' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:26.748 16:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.748 16:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.748 16:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.029 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.029 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.029 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.029 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.289 16:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.289 16:05:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:32.289 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:32.289 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.289 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:32.290 Found net devices under 0000:af:00.0: cvl_0_0 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:32.290 Found net devices under 0000:af:00.1: cvl_0_1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.290 
16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:32:32.290 00:32:32.290 --- 10.0.0.2 ping statistics --- 00:32:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.290 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:32:32.290 00:32:32.290 --- 10.0.0.1 ping statistics --- 00:32:32.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.290 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.290 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2231985 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2231985 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2231985 ']' 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.550 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.550 [2024-12-09 16:05:27.596531] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.550 [2024-12-09 16:05:27.597420] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:32:32.550 [2024-12-09 16:05:27.597454] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.550 [2024-12-09 16:05:27.675272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.550 [2024-12-09 16:05:27.715219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.550 [2024-12-09 16:05:27.715272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.550 [2024-12-09 16:05:27.715280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.550 [2024-12-09 16:05:27.715286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.550 [2024-12-09 16:05:27.715291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.550 [2024-12-09 16:05:27.716886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:32.550 [2024-12-09 16:05:27.717003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:32.550 [2024-12-09 16:05:27.717111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.550 [2024-12-09 16:05:27.717113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:32.809 [2024-12-09 16:05:27.784941] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.809 [2024-12-09 16:05:27.785579] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.809 [2024-12-09 16:05:27.785766] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:32.809 [2024-12-09 16:05:27.785971] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.809 [2024-12-09 16:05:27.786029] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 [2024-12-09 16:05:27.865796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 Malloc0 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:32.809 [2024-12-09 16:05:27.945855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:32.809 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:32.809 { 00:32:32.809 "params": { 00:32:32.809 "name": "Nvme$subsystem", 00:32:32.810 "trtype": "$TEST_TRANSPORT", 00:32:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.810 "adrfam": "ipv4", 00:32:32.810 "trsvcid": "$NVMF_PORT", 00:32:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.810 "hdgst": ${hdgst:-false}, 00:32:32.810 "ddgst": ${ddgst:-false} 00:32:32.810 }, 00:32:32.810 "method": "bdev_nvme_attach_controller" 00:32:32.810 } 00:32:32.810 EOF 00:32:32.810 )") 00:32:32.810 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:32.810 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:32.810 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:32.810 16:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:32.810 "params": { 00:32:32.810 "name": "Nvme1", 00:32:32.810 "trtype": "tcp", 00:32:32.810 "traddr": "10.0.0.2", 00:32:32.810 "adrfam": "ipv4", 00:32:32.810 "trsvcid": "4420", 00:32:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:32.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:32.810 "hdgst": false, 00:32:32.810 "ddgst": false 00:32:32.810 }, 00:32:32.810 "method": "bdev_nvme_attach_controller" 00:32:32.810 }' 00:32:32.810 [2024-12-09 16:05:27.995069] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:32:32.810 [2024-12-09 16:05:27.995111] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232202 ] 00:32:33.067 [2024-12-09 16:05:28.052665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:33.067 [2024-12-09 16:05:28.094924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.067 [2024-12-09 16:05:28.095030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.067 [2024-12-09 16:05:28.095030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:33.324 I/O targets: 00:32:33.324 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:33.324 00:32:33.324 00:32:33.324 CUnit - A unit testing framework for C - Version 2.1-3 00:32:33.324 http://cunit.sourceforge.net/ 00:32:33.324 00:32:33.324 00:32:33.324 Suite: bdevio tests on: Nvme1n1 00:32:33.324 Test: blockdev write read block ...passed 00:32:33.324 Test: blockdev write zeroes read block ...passed 00:32:33.324 Test: blockdev write zeroes read no split ...passed 00:32:33.324 Test: blockdev 
write zeroes read split ...passed 00:32:33.324 Test: blockdev write zeroes read split partial ...passed 00:32:33.324 Test: blockdev reset ...[2024-12-09 16:05:28.517533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:33.324 [2024-12-09 16:05:28.517600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd298b0 (9): Bad file descriptor 00:32:33.582 [2024-12-09 16:05:28.570327] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:33.582 passed 00:32:33.582 Test: blockdev write read 8 blocks ...passed 00:32:33.582 Test: blockdev write read size > 128k ...passed 00:32:33.582 Test: blockdev write read invalid size ...passed 00:32:33.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:33.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:33.582 Test: blockdev write read max offset ...passed 00:32:33.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:33.582 Test: blockdev writev readv 8 blocks ...passed 00:32:33.582 Test: blockdev writev readv 30 x 1block ...passed 00:32:33.582 Test: blockdev writev readv block ...passed 00:32:33.582 Test: blockdev writev readv size > 128k ...passed 00:32:33.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:33.582 Test: blockdev comparev and writev ...[2024-12-09 16:05:28.740987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 
[2024-12-09 16:05:28.741034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:33.582 [2024-12-09 16:05:28.741978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:33.582 [2024-12-09 16:05:28.741986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:33.582 passed 00:32:33.840 Test: blockdev nvme passthru rw ...passed 00:32:33.840 Test: blockdev nvme passthru vendor specific ...[2024-12-09 16:05:28.824590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:33.840 [2024-12-09 16:05:28.824610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:33.840 [2024-12-09 16:05:28.824717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:33.840 [2024-12-09 16:05:28.824726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:33.840 [2024-12-09 16:05:28.824834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:33.840 [2024-12-09 16:05:28.824844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:33.840 [2024-12-09 16:05:28.824951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:33.840 [2024-12-09 16:05:28.824960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:33.840 passed 00:32:33.840 Test: blockdev nvme admin passthru ...passed 00:32:33.840 Test: blockdev copy ...passed 00:32:33.840 00:32:33.840 Run Summary: Type Total Ran Passed Failed Inactive 00:32:33.840 suites 1 1 n/a 0 0 00:32:33.840 tests 23 23 23 0 0 00:32:33.840 asserts 152 152 152 0 n/a 00:32:33.840 00:32:33.840 Elapsed time = 0.949 
seconds 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:33.840 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:33.840 rmmod nvme_tcp 00:32:33.840 rmmod nvme_fabrics 00:32:33.840 rmmod nvme_keyring 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2231985 ']' 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2231985 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2231985 ']' 00:32:34.099 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2231985 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2231985 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2231985' 00:32:34.100 killing process with pid 2231985 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2231985 00:32:34.100 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2231985 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.359 16:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.266 00:32:36.266 real 0m9.984s 00:32:36.266 user 0m8.979s 00:32:36.266 sys 0m5.184s 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:36.266 ************************************ 00:32:36.266 END TEST nvmf_bdevio 00:32:36.266 ************************************ 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:36.266 00:32:36.266 real 4m32.452s 00:32:36.266 user 9m6.871s 00:32:36.266 sys 1m51.036s 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:36.266 16:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.266 ************************************ 00:32:36.266 END TEST nvmf_target_core_interrupt_mode 00:32:36.266 ************************************ 00:32:36.266 16:05:31 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:36.266 16:05:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:36.266 16:05:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.266 16:05:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.526 ************************************ 00:32:36.526 START TEST nvmf_interrupt 00:32:36.526 ************************************ 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:36.526 * Looking for test storage... 
00:32:36.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.526 --rc genhtml_branch_coverage=1 00:32:36.526 --rc genhtml_function_coverage=1 00:32:36.526 --rc genhtml_legend=1 00:32:36.526 --rc geninfo_all_blocks=1 00:32:36.526 --rc geninfo_unexecuted_blocks=1 00:32:36.526 00:32:36.526 ' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.526 --rc genhtml_branch_coverage=1 00:32:36.526 --rc 
genhtml_function_coverage=1 00:32:36.526 --rc genhtml_legend=1 00:32:36.526 --rc geninfo_all_blocks=1 00:32:36.526 --rc geninfo_unexecuted_blocks=1 00:32:36.526 00:32:36.526 ' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.526 --rc genhtml_branch_coverage=1 00:32:36.526 --rc genhtml_function_coverage=1 00:32:36.526 --rc genhtml_legend=1 00:32:36.526 --rc geninfo_all_blocks=1 00:32:36.526 --rc geninfo_unexecuted_blocks=1 00:32:36.526 00:32:36.526 ' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:36.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.526 --rc genhtml_branch_coverage=1 00:32:36.526 --rc genhtml_function_coverage=1 00:32:36.526 --rc genhtml_legend=1 00:32:36.526 --rc geninfo_all_blocks=1 00:32:36.526 --rc geninfo_unexecuted_blocks=1 00:32:36.526 00:32:36.526 ' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.526 
16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.526 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.527 
16:05:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.527 16:05:31 
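The PATH echoed by paths/export.sh above carries each toolchain directory (/opt/go, /opt/golangci, /opt/protoc) several times, because the export script prepends them once per shell layer that sources it. That is harmless, but if deduplication were wanted it could look like this first-occurrence-wins sketch (the helper name is an assumption, not part of SPDK):

```shell
# Remove duplicate entries from a PATH-like string, keeping the first
# occurrence of each directory in order.
dedupe_path() {
    local out= dir
    local IFS=:
    for dir in $1; do                      # split on ':' via IFS
        case ":$out:" in
            *":$dir:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"   # prints /opt/go/bin:/usr/bin:/bin
```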
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.527 
16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.527 16:05:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.099 16:05:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:43.099 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:43.099 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.099 16:05:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:43.099 Found net devices under 0000:af:00.0: cvl_0_0 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:43.099 Found net devices under 0000:af:00.1: cvl_0_1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
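The `Found net devices under 0000:af:00.0: cvl_0_0` lines above come from globbing `/sys/bus/pci/devices/<bdf>/net/` for each discovered PCI address and keeping only the basename (`${pci_net_devs[@]##*/}`). The same two steps can be sketched over a throwaway directory standing in for sysfs (paths and interface name below are illustrative):

```shell
# Emulate the pci -> net-device resolution from nvmf/common.sh using a fake
# sysfs tree, since the real one needs the actual NIC present.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0"

pci_net_devs=("$sysfs/0000:af:00.0/net/"*)      # glob the net/ subdirectory
pci_net_devs=("${pci_net_devs[@]##*/}")         # strip the path, keep the ifname
echo "Found net devices under 0000:af:00.0: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```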
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.099 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.100 16:05:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:32:43.100 00:32:43.100 --- 10.0.0.2 ping statistics --- 00:32:43.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.100 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:32:43.100 00:32:43.100 --- 10.0.0.1 ping statistics --- 00:32:43.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.100 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.100 16:05:37 
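The two pings above are the connectivity gate for the freshly created namespace pair, and the test proceeds because both report 0% packet loss. A sketch of pulling that loss figure out of a captured `ping -c` summary (the sample line is hard-coded, since reproducing it for real needs the cvl_0_0_ns_spdk namespace and root):

```shell
# Print the packet-loss percentage from captured `ping -c` output.
ping_loss() {
    awk -F, '/packet loss/ { for (i = 1; i <= NF; i++)
        if ($i ~ /packet loss/) { gsub(/[^0-9.]/, "", $i); print $i } }' <<< "$1"
}

sample='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
ping_loss "$sample"     # prints 0
```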
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2235738 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2235738 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2235738 ']' 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 [2024-12-09 16:05:37.605248] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.100 [2024-12-09 16:05:37.606181] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:32:43.100 [2024-12-09 16:05:37.606224] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.100 [2024-12-09 16:05:37.684812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:43.100 [2024-12-09 16:05:37.724985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.100 [2024-12-09 16:05:37.725017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.100 [2024-12-09 16:05:37.725024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.100 [2024-12-09 16:05:37.725030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.100 [2024-12-09 16:05:37.725035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.100 [2024-12-09 16:05:37.726187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.100 [2024-12-09 16:05:37.726191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.100 [2024-12-09 16:05:37.794434] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.100 [2024-12-09 16:05:37.794507] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:43.100 [2024-12-09 16:05:37.794634] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
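After launching nvmf_tgt with --interrupt-mode, `waitforlisten` above blocks (up to max_retries=100) until the app is up and listening on /var/tmp/spdk.sock. The real helper also verifies the pid and talks to the RPC socket; this is only a simplified stand-in that polls for a path to appear (name and behavior are ours, not SPDK's):

```shell
# Simplified wait loop: poll until $1 exists or $2 attempts are exhausted.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage mirrors the log: `wait_for_path /var/tmp/spdk.sock 100 || exit 1` before issuing any rpc_cmd calls.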
00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:43.100 5000+0 records in 00:32:43.100 5000+0 records out 00:32:43.100 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0165914 s, 617 MB/s 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 AIO0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.100 16:05:37 
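setup_bdev_aio above builds the AIO0 backing store with `dd if=/dev/zero ... bs=2048 count=5000` (10240000 bytes, matching the dd summary in the log) before registering it via `rpc_cmd bdev_aio_create ... AIO0 2048`. The same file creation, scaled down and size-checked (GNU dd's `status=none` assumed):

```shell
# Create a small zero-filled backing file the way setup_bdev_aio does,
# then sanity-check its size: bs * count = 2048 * 50 = 102400 bytes.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=50 status=none
size=$(wc -c < "$aiofile")
echo "created $size bytes"      # 102400
rm -f "$aiofile"
```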
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 [2024-12-09 16:05:37.930930] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 [2024-12-09 16:05:37.971274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2235738 0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 0 idle 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:43.100 16:05:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235738 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235738 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.100 
16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2235738 1 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 1 idle 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:43.100 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:43.101 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235742 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235742 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2235992 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2235738 0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2235738 0 busy 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235738 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235738 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:43.359 16:05:38 
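reactor_is_busy_or_idle, traced above for reactor_0, greps one reactor thread out of `top -bHn 1 -p <pid> -w 256`, takes column 9 (%CPU), truncates the fraction, and compares it against the thresholds (idle_threshold=30; busy_threshold defaults to 65 but is overridden to 30 here via BUSY_THRESHOLD). A sketch of the same classification over a captured top line, using the 65/30 defaults (the "unknown" branch is ours; the real helper loops and retries instead):

```shell
# Classify a reactor thread from one line of `top -bHn 1` output.
classify_reactor() {
    local line=$1 busy_threshold=${2:-65} idle_threshold=${3:-30}
    local cpu_rate
    cpu_rate=$(awk '{print $9}' <<< "$line")   # column 9 is %CPU
    cpu_rate=${cpu_rate%.*}                    # drop the fraction, as the log does
    if (( cpu_rate >= busy_threshold )); then
        echo busy
    elif (( cpu_rate <= idle_threshold )); then
        echo idle
    else
        echo unknown
    fi
}

classify_reactor '2235738 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0'   # prints idle
```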
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2235738 1 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2235738 1 busy 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:43.359 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235742 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1' 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235742 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.27 reactor_1 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.617 16:05:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2235992 00:32:53.579 [2024-12-09 16:05:48.471736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 [2024-12-09 16:05:48.471815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81b90 is same with the state(6) to be set 00:32:53.579 Initializing NVMe Controllers 00:32:53.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:53.579 Controller IO queue size 256, less 
than required. 00:32:53.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:53.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:53.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:53.579 Initialization complete. Launching workers. 00:32:53.579 ======================================================== 00:32:53.579 Latency(us) 00:32:53.579 Device Information : IOPS MiB/s Average min max 00:32:53.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16457.90 64.29 15561.55 4686.96 30982.48 00:32:53.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16669.00 65.11 15361.20 8425.14 55120.56 00:32:53.579 ======================================================== 00:32:53.579 Total : 33126.89 129.40 15460.73 4686.96 55120.56 00:32:53.579 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2235738 0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 0 idle 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235738 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235738 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2235738 1 00:32:53.579 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 1 idle 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:53.580 16:05:48 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:53.580 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235742 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235742 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:32:53.838 16:05:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:54.097 16:05:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:54.097 16:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:54.097 16:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:54.097 16:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:54.097 16:05:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2235738 0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 0 idle 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235738 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.47 reactor_0' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235738 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.47 reactor_0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.627 16:05:51 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2235738 1 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2235738 1 idle 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2235738 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2235738 -w 256 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2235742 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2235742 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.09 reactor_1 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 
-- # cpu_rate=0.0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.627 16:05:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:56.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:56.628 
16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.628 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.628 rmmod nvme_tcp 00:32:56.957 rmmod nvme_fabrics 00:32:56.957 rmmod nvme_keyring 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2235738 ']' 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2235738 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2235738 ']' 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2235738 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235738 00:32:56.957 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.958 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.958 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235738' 00:32:56.958 killing process with pid 2235738 00:32:56.958 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2235738 00:32:56.958 16:05:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2235738 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
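The reactor_is_busy_or_idle checks traced repeatedly in this log all follow one pattern: take a single `top -bHn 1` sample for the target PID, grep the reactor thread, extract the %CPU field, and compare it against a threshold. A minimal standalone sketch of that pattern (the function name `check_reactor` is illustrative, not from SPDK; only the field extraction mirrors the interrupt/common.sh@26-28 steps captured above):

```shell
# Hypothetical reproduction of the busy/idle check traced above.
# Input is one line of `top -bH` output; field 9 is %CPU, e.g.
#   "2235738 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0"
check_reactor() {
    local line="$1" threshold="$2" state="$3"
    local cpu_rate
    cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%%.*}   # truncate to integer, as common.sh@28 does
    if [ "$state" = busy ]; then
        # busy: the reactor thread must be at or above the threshold
        [ "$cpu_rate" -ge "$threshold" ]
    else
        # idle: the reactor thread must be at or below the threshold
        [ "$cpu_rate" -le "$threshold" ]
    fi
}
```

With the samples from this log, a 99.9% reactor passes the busy check at threshold 30 and a 0.0% reactor passes the idle check, which is exactly the pass/fail pattern the trace shows before and after the perf run.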
00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.285 16:05:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.286 16:05:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:57.286 16:05:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.215 16:05:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.215 00:32:59.215 real 0m22.728s 00:32:59.215 user 0m39.835s 00:32:59.215 sys 0m8.228s 00:32:59.215 16:05:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.215 16:05:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:59.215 ************************************ 00:32:59.215 END TEST nvmf_interrupt 00:32:59.215 ************************************ 00:32:59.215 00:32:59.215 real 27m23.622s 00:32:59.215 user 56m24.464s 00:32:59.215 sys 9m18.277s 00:32:59.215 16:05:54 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.215 16:05:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.215 ************************************ 00:32:59.215 END TEST nvmf_tcp 00:32:59.215 ************************************ 00:32:59.215 16:05:54 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:59.215 16:05:54 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.215 16:05:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:59.215 16:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.215 16:05:54 -- common/autotest_common.sh@10 -- # set +x 00:32:59.215 ************************************ 00:32:59.216 START TEST spdkcli_nvmf_tcp 00:32:59.216 ************************************ 00:32:59.216 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.475 * Looking for test storage... 00:32:59.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:59.475 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.476 --rc genhtml_branch_coverage=1 00:32:59.476 --rc genhtml_function_coverage=1 00:32:59.476 --rc genhtml_legend=1 00:32:59.476 --rc geninfo_all_blocks=1 
00:32:59.476 --rc geninfo_unexecuted_blocks=1 00:32:59.476 00:32:59.476 ' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.476 --rc genhtml_branch_coverage=1 00:32:59.476 --rc genhtml_function_coverage=1 00:32:59.476 --rc genhtml_legend=1 00:32:59.476 --rc geninfo_all_blocks=1 00:32:59.476 --rc geninfo_unexecuted_blocks=1 00:32:59.476 00:32:59.476 ' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:59.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.476 --rc genhtml_branch_coverage=1 00:32:59.476 --rc genhtml_function_coverage=1 00:32:59.476 --rc genhtml_legend=1 00:32:59.476 --rc geninfo_all_blocks=1 00:32:59.476 --rc geninfo_unexecuted_blocks=1 00:32:59.476 00:32:59.476 ' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.476 --rc genhtml_branch_coverage=1 00:32:59.476 --rc genhtml_function_coverage=1 00:32:59.476 --rc genhtml_legend=1 00:32:59.476 --rc geninfo_all_blocks=1 00:32:59.476 --rc geninfo_unexecuted_blocks=1 00:32:59.476 00:32:59.476 ' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:59.476 16:05:54 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
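The `lt 1.15 2` / `cmp_versions` trace earlier in this section (scripts/common.sh@373 onward) splits two dotted version strings into fields and compares them component by component, treating missing components as zero. A self-contained approximation of that comparison (the name `version_lt` is mine; SPDK's helper also splits on `-` and `:` and supports gt/eq modes, which this sketch omits):

```shell
# Illustrative field-by-field dotted-version comparison, assuming
# purely numeric components separated by dots.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad short versions with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

This reproduces the decision the trace makes: 1.15 (the lcov version floor) is less than 2, so the branch-coverage LCOV_OPTS are exported.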
00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2238664 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2238664 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2238664 ']' 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.476 16:05:54 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.476 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.476 [2024-12-09 16:05:54.636076] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:32:59.476 [2024-12-09 16:05:54.636124] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238664 ] 00:32:59.735 [2024-12-09 16:05:54.709150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.735 [2024-12-09 16:05:54.751640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.735 [2024-12-09 16:05:54.751643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.735 16:05:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:59.735 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:59.735 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:59.735 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:59.735 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:59.735 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:59.735 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:59.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.735 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.735 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:59.735 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:59.735 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:59.735 ' 00:33:03.019 [2024-12-09 16:05:57.566833] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.954 [2024-12-09 16:05:58.903182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:33:06.483 [2024-12-09 16:06:01.386835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:08.384 [2024-12-09 16:06:03.553596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:10.284 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:10.284 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:10.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.284 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:10.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:10.284 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:10.284 16:06:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:10.542 16:06:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.800 16:06:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:10.800 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:10.800 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:10.800 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:10.800 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:10.800 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:10.800 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:10.800 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:10.800 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:10.800 ' 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:17.360 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:17.360 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:17.360 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:17.360 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:17.360 16:06:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:17.360 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2238664 ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238664' 00:33:17.361 killing process with pid 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2238664 00:33:17.361 16:06:11 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2238664 ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2238664 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2238664 ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2238664 00:33:17.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2238664) - No such process 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2238664 is not found' 00:33:17.361 Process with pid 2238664 is not found 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:17.361 00:33:17.361 real 0m17.319s 00:33:17.361 user 0m38.230s 00:33:17.361 sys 0m0.777s 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.361 16:06:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.361 ************************************ 00:33:17.361 END TEST spdkcli_nvmf_tcp 00:33:17.361 ************************************ 00:33:17.361 16:06:11 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:17.361 16:06:11 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:33:17.361 16:06:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.361 16:06:11 -- common/autotest_common.sh@10 -- # set +x 00:33:17.361 ************************************ 00:33:17.361 START TEST nvmf_identify_passthru 00:33:17.361 ************************************ 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:17.361 * Looking for test storage... 00:33:17.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.361 16:06:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:17.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.361 --rc genhtml_branch_coverage=1 00:33:17.361 --rc genhtml_function_coverage=1 00:33:17.361 --rc genhtml_legend=1 
00:33:17.361 --rc geninfo_all_blocks=1 00:33:17.361 --rc geninfo_unexecuted_blocks=1 00:33:17.361 00:33:17.361 ' 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:17.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.361 --rc genhtml_branch_coverage=1 00:33:17.361 --rc genhtml_function_coverage=1 00:33:17.361 --rc genhtml_legend=1 00:33:17.361 --rc geninfo_all_blocks=1 00:33:17.361 --rc geninfo_unexecuted_blocks=1 00:33:17.361 00:33:17.361 ' 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:17.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.361 --rc genhtml_branch_coverage=1 00:33:17.361 --rc genhtml_function_coverage=1 00:33:17.361 --rc genhtml_legend=1 00:33:17.361 --rc geninfo_all_blocks=1 00:33:17.361 --rc geninfo_unexecuted_blocks=1 00:33:17.361 00:33:17.361 ' 00:33:17.361 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:17.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.361 --rc genhtml_branch_coverage=1 00:33:17.361 --rc genhtml_function_coverage=1 00:33:17.361 --rc genhtml_legend=1 00:33:17.361 --rc geninfo_all_blocks=1 00:33:17.361 --rc geninfo_unexecuted_blocks=1 00:33:17.361 00:33:17.361 ' 00:33:17.361 16:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.361 16:06:11 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.361 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:17.362 16:06:11 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.362 16:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.362 16:06:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:17.362 16:06:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.362 16:06:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.362 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:17.362 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.362 16:06:11 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.362 16:06:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:22.639 
16:06:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:22.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:22.639 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:22.640 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:22.640 Found net devices under 0000:af:00.0: cvl_0_0 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.640 16:06:17 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:22.640 Found net devices under 0000:af:00.1: cvl_0_1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:22.640 
16:06:17 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:22.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:22.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:33:22.640 00:33:22.640 --- 10.0.0.2 ping statistics --- 00:33:22.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.640 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:22.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:33:22.640 00:33:22.640 --- 10.0.0.1 ping statistics --- 00:33:22.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.640 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:22.640 16:06:17 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:22.640 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.899 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:22.899 
16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:22.899 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:22.900 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:22.900 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:22.900 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:22.900 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:22.900 16:06:17 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:22.900 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:22.900 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:22.900 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:22.900 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:22.900 16:06:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:27.091 16:06:22 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ807001JM1P0FGN 00:33:27.091 16:06:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:27.091 16:06:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:27.091 16:06:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:31.282 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:31.282 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:31.282 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2246347 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2246347 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2246347 ']' 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.283 [2024-12-09 16:06:26.349413] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:33:31.283 [2024-12-09 16:06:26.349461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.283 [2024-12-09 16:06:26.427259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.283 [2024-12-09 16:06:26.468287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.283 [2024-12-09 16:06:26.468325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.283 [2024-12-09 16:06:26.468331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.283 [2024-12-09 16:06:26.468337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.283 [2024-12-09 16:06:26.468343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:31.283 [2024-12-09 16:06:26.469728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.283 [2024-12-09 16:06:26.469834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.283 [2024-12-09 16:06:26.469944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.283 [2024-12-09 16:06:26.469945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.283 INFO: Log level set to 20 00:33:31.283 INFO: Requests: 00:33:31.283 { 00:33:31.283 "jsonrpc": "2.0", 00:33:31.283 "method": "nvmf_set_config", 00:33:31.283 "id": 1, 00:33:31.283 "params": { 00:33:31.283 "admin_cmd_passthru": { 00:33:31.283 "identify_ctrlr": true 00:33:31.283 } 00:33:31.283 } 00:33:31.283 } 00:33:31.283 00:33:31.283 INFO: response: 00:33:31.283 { 00:33:31.283 "jsonrpc": "2.0", 00:33:31.283 "id": 1, 00:33:31.283 "result": true 00:33:31.283 } 00:33:31.283 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.283 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.283 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.283 INFO: Setting log level to 20 00:33:31.283 INFO: Setting log level to 20 00:33:31.283 INFO: Log level set to 20 00:33:31.283 INFO: Log level set to 20 00:33:31.542 
INFO: Requests: 00:33:31.542 { 00:33:31.542 "jsonrpc": "2.0", 00:33:31.542 "method": "framework_start_init", 00:33:31.542 "id": 1 00:33:31.542 } 00:33:31.542 00:33:31.542 INFO: Requests: 00:33:31.542 { 00:33:31.542 "jsonrpc": "2.0", 00:33:31.542 "method": "framework_start_init", 00:33:31.542 "id": 1 00:33:31.542 } 00:33:31.542 00:33:31.542 [2024-12-09 16:06:26.582243] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:31.542 INFO: response: 00:33:31.542 { 00:33:31.542 "jsonrpc": "2.0", 00:33:31.542 "id": 1, 00:33:31.542 "result": true 00:33:31.542 } 00:33:31.542 00:33:31.542 INFO: response: 00:33:31.542 { 00:33:31.542 "jsonrpc": "2.0", 00:33:31.542 "id": 1, 00:33:31.542 "result": true 00:33:31.542 } 00:33:31.542 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.542 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.542 INFO: Setting log level to 40 00:33:31.542 INFO: Setting log level to 40 00:33:31.542 INFO: Setting log level to 40 00:33:31.542 [2024-12-09 16:06:26.595504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.542 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:31.542 16:06:26 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:31.542 16:06:26 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.542 16:06:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.828 Nvme0n1 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.828 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.828 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.828 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.828 [2024-12-09 16:06:29.500554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.828 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:34.828 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.828 16:06:29 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.828 [ 00:33:34.828 { 00:33:34.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:34.829 "subtype": "Discovery", 00:33:34.829 "listen_addresses": [], 00:33:34.829 "allow_any_host": true, 00:33:34.829 "hosts": [] 00:33:34.829 }, 00:33:34.829 { 00:33:34.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.829 "subtype": "NVMe", 00:33:34.829 "listen_addresses": [ 00:33:34.829 { 00:33:34.829 "trtype": "TCP", 00:33:34.829 "adrfam": "IPv4", 00:33:34.829 "traddr": "10.0.0.2", 00:33:34.829 "trsvcid": "4420" 00:33:34.829 } 00:33:34.829 ], 00:33:34.829 "allow_any_host": true, 00:33:34.829 "hosts": [], 00:33:34.829 "serial_number": "SPDK00000000000001", 00:33:34.829 "model_number": "SPDK bdev Controller", 00:33:34.829 "max_namespaces": 1, 00:33:34.829 "min_cntlid": 1, 00:33:34.829 "max_cntlid": 65519, 00:33:34.829 "namespaces": [ 00:33:34.829 { 00:33:34.829 "nsid": 1, 00:33:34.829 "bdev_name": "Nvme0n1", 00:33:34.829 "name": "Nvme0n1", 00:33:34.829 "nguid": "6FB68F285B7448D9854D5CC1FDF0DCFA", 00:33:34.829 "uuid": "6fb68f28-5b74-48d9-854d-5cc1fdf0dcfa" 00:33:34.829 } 00:33:34.829 ] 00:33:34.829 } 00:33:34.829 ] 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ807001JM1P0FGN 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ807001JM1P0FGN '!=' BTLJ807001JM1P0FGN ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:34.829 16:06:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.829 rmmod nvme_tcp 00:33:34.829 rmmod nvme_fabrics 00:33:34.829 rmmod nvme_keyring 00:33:34.829 16:06:29 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2246347 ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2246347 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2246347 ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2246347 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246347 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246347' 00:33:34.829 killing process with pid 2246347 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2246347 00:33:34.829 16:06:29 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2246347 00:33:36.731 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:36.732 16:06:31 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.732 16:06:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.732 16:06:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:36.732 16:06:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.638 16:06:33 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.638 00:33:38.638 real 0m21.774s 00:33:38.638 user 0m26.678s 00:33:38.638 sys 0m6.134s 00:33:38.638 16:06:33 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.638 16:06:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:38.638 ************************************ 00:33:38.638 END TEST nvmf_identify_passthru 00:33:38.638 ************************************ 00:33:38.638 16:06:33 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:38.638 16:06:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:38.638 16:06:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.638 16:06:33 -- common/autotest_common.sh@10 -- # set +x 00:33:38.638 ************************************ 00:33:38.638 START TEST nvmf_dif 00:33:38.638 ************************************ 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:38.638 * Looking for test storage... 
00:33:38.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.638 16:06:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.638 --rc genhtml_branch_coverage=1 00:33:38.638 --rc genhtml_function_coverage=1 00:33:38.638 --rc genhtml_legend=1 00:33:38.638 --rc geninfo_all_blocks=1 00:33:38.638 --rc geninfo_unexecuted_blocks=1 00:33:38.638 00:33:38.638 ' 00:33:38.638 16:06:33 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:38.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.638 --rc genhtml_branch_coverage=1 00:33:38.638 --rc genhtml_function_coverage=1 00:33:38.638 --rc genhtml_legend=1 00:33:38.638 --rc geninfo_all_blocks=1 00:33:38.638 --rc geninfo_unexecuted_blocks=1 00:33:38.638 00:33:38.638 ' 00:33:38.639 16:06:33 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:33:38.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.639 --rc genhtml_branch_coverage=1 00:33:38.639 --rc genhtml_function_coverage=1 00:33:38.639 --rc genhtml_legend=1 00:33:38.639 --rc geninfo_all_blocks=1 00:33:38.639 --rc geninfo_unexecuted_blocks=1 00:33:38.639 00:33:38.639 ' 00:33:38.639 16:06:33 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.639 --rc genhtml_branch_coverage=1 00:33:38.639 --rc genhtml_function_coverage=1 00:33:38.639 --rc genhtml_legend=1 00:33:38.639 --rc geninfo_all_blocks=1 00:33:38.639 --rc geninfo_unexecuted_blocks=1 00:33:38.639 00:33:38.639 ' 00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:33:38.639 16:06:33 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.639 16:06:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.639 16:06:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.639 16:06:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.639 16:06:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.639 16:06:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.639 16:06:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.639 16:06:33 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.639 16:06:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:38.639 16:06:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:38.639 16:06:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.639 16:06:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.639 16:06:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.639 16:06:33 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.639 16:06:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:45.222 16:06:39 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.222 16:06:39 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:45.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:45.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.223 16:06:39 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:45.223 Found net devices under 0000:af:00.0: cvl_0_0 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:45.223 Found net devices under 0000:af:00.1: cvl_0_1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.223 
16:06:39 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:45.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:33:45.223 00:33:45.223 --- 10.0.0.2 ping statistics --- 00:33:45.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.223 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:33:45.223 00:33:45.223 --- 10.0.0.1 ping statistics --- 00:33:45.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.223 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:45.223 16:06:39 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:47.128 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:33:47.388 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:47.388 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:47.388 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.647 16:06:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:47.647 16:06:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2251795 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:47.647 16:06:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2251795 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2251795 ']' 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.647 16:06:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.647 [2024-12-09 16:06:42.731017] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:33:47.647 [2024-12-09 16:06:42.731067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.647 [2024-12-09 16:06:42.810624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.647 [2024-12-09 16:06:42.849961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.647 [2024-12-09 16:06:42.849997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.647 [2024-12-09 16:06:42.850004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.647 [2024-12-09 16:06:42.850010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.647 [2024-12-09 16:06:42.850015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:47.647 [2024-12-09 16:06:42.850552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:47.906 16:06:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 16:06:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.906 16:06:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:47.906 16:06:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 [2024-12-09 16:06:42.987198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.906 16:06:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.906 16:06:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 ************************************ 00:33:47.906 START TEST fio_dif_1_default 00:33:47.906 ************************************ 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 bdev_null0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:47.906 [2024-12-09 16:06:43.063510] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.906 { 00:33:47.906 "params": { 00:33:47.906 "name": "Nvme$subsystem", 00:33:47.906 "trtype": "$TEST_TRANSPORT", 00:33:47.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.906 "adrfam": "ipv4", 00:33:47.906 "trsvcid": "$NVMF_PORT", 00:33:47.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.906 "hdgst": ${hdgst:-false}, 00:33:47.906 "ddgst": ${ddgst:-false} 00:33:47.906 }, 00:33:47.906 "method": "bdev_nvme_attach_controller" 00:33:47.906 } 00:33:47.906 EOF 00:33:47.906 )") 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:47.906 16:06:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.907 "params": { 00:33:47.907 "name": "Nvme0", 00:33:47.907 "trtype": "tcp", 00:33:47.907 "traddr": "10.0.0.2", 00:33:47.907 "adrfam": "ipv4", 00:33:47.907 "trsvcid": "4420", 00:33:47.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:47.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:47.907 "hdgst": false, 00:33:47.907 "ddgst": false 00:33:47.907 }, 00:33:47.907 "method": "bdev_nvme_attach_controller" 00:33:47.907 }' 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:47.907 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.189 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.189 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.189 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:48.189 16:06:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:48.449 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:48.449 fio-3.35 
00:33:48.449 Starting 1 thread 00:34:00.656 00:34:00.656 filename0: (groupid=0, jobs=1): err= 0: pid=2252164: Mon Dec 9 16:06:54 2024 00:34:00.656 read: IOPS=203, BW=815KiB/s (835kB/s)(8176KiB/10031msec) 00:34:00.656 slat (nsec): min=5817, max=27762, avg=6132.05, stdev=1201.22 00:34:00.656 clat (usec): min=361, max=42569, avg=19612.12, stdev=20503.54 00:34:00.656 lat (usec): min=367, max=42576, avg=19618.25, stdev=20503.52 00:34:00.656 clat percentiles (usec): 00:34:00.656 | 1.00th=[ 392], 5.00th=[ 433], 10.00th=[ 469], 20.00th=[ 490], 00:34:00.656 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[41157], 00:34:00.656 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:00.656 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:00.656 | 99.99th=[42730] 00:34:00.656 bw ( KiB/s): min= 640, max= 1152, per=100.00%, avg=816.00, stdev=124.37, samples=20 00:34:00.656 iops : min= 160, max= 288, avg=204.00, stdev=31.09, samples=20 00:34:00.656 lat (usec) : 500=21.48%, 750=31.95% 00:34:00.656 lat (msec) : 4=0.20%, 50=46.38% 00:34:00.656 cpu : usr=92.25%, sys=7.50%, ctx=10, majf=0, minf=0 00:34:00.656 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.656 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.656 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:00.656 00:34:00.656 Run status group 0 (all jobs): 00:34:00.656 READ: bw=815KiB/s (835kB/s), 815KiB/s-815KiB/s (835kB/s-835kB/s), io=8176KiB (8372kB), run=10031-10031msec 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 00:34:00.656 real 0m11.246s 00:34:00.656 user 0m16.293s 00:34:00.656 sys 0m1.052s 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 ************************************ 00:34:00.656 END TEST fio_dif_1_default 00:34:00.656 ************************************ 00:34:00.656 16:06:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:00.656 16:06:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:00.656 16:06:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 ************************************ 00:34:00.656 START TEST fio_dif_1_multi_subsystems 00:34:00.656 ************************************ 00:34:00.656 16:06:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 bdev_null0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 [2024-12-09 16:06:54.377752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.656 bdev_null1 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:00.656 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 
00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:00.657 { 00:34:00.657 "params": { 00:34:00.657 "name": "Nvme$subsystem", 00:34:00.657 "trtype": "$TEST_TRANSPORT", 00:34:00.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.657 "adrfam": "ipv4", 00:34:00.657 "trsvcid": "$NVMF_PORT", 00:34:00.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.657 "hdgst": ${hdgst:-false}, 00:34:00.657 "ddgst": ${ddgst:-false} 00:34:00.657 }, 00:34:00.657 "method": "bdev_nvme_attach_controller" 00:34:00.657 } 00:34:00.657 EOF 00:34:00.657 )") 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:00.657 { 00:34:00.657 "params": { 00:34:00.657 "name": "Nvme$subsystem", 00:34:00.657 "trtype": "$TEST_TRANSPORT", 00:34:00.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.657 "adrfam": "ipv4", 00:34:00.657 "trsvcid": "$NVMF_PORT", 00:34:00.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.657 "hdgst": ${hdgst:-false}, 00:34:00.657 "ddgst": ${ddgst:-false} 00:34:00.657 }, 00:34:00.657 "method": "bdev_nvme_attach_controller" 00:34:00.657 } 00:34:00.657 EOF 00:34:00.657 )") 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:00.657 "params": { 00:34:00.657 "name": "Nvme0", 00:34:00.657 "trtype": "tcp", 00:34:00.657 "traddr": "10.0.0.2", 00:34:00.657 "adrfam": "ipv4", 00:34:00.657 "trsvcid": "4420", 00:34:00.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:00.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:00.657 "hdgst": false, 00:34:00.657 "ddgst": false 00:34:00.657 }, 00:34:00.657 "method": "bdev_nvme_attach_controller" 00:34:00.657 },{ 00:34:00.657 "params": { 00:34:00.657 "name": "Nvme1", 00:34:00.657 "trtype": "tcp", 00:34:00.657 "traddr": "10.0.0.2", 00:34:00.657 "adrfam": "ipv4", 00:34:00.657 "trsvcid": "4420", 00:34:00.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:00.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:00.657 "hdgst": false, 00:34:00.657 "ddgst": false 00:34:00.657 }, 00:34:00.657 "method": "bdev_nvme_attach_controller" 00:34:00.657 }' 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:00.657 16:06:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.657 fio-3.35 00:34:00.657 Starting 2 threads 00:34:10.623 00:34:10.623 filename0: (groupid=0, jobs=1): err= 0: pid=2254108: Mon Dec 9 16:07:05 2024 00:34:10.623 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10019msec) 00:34:10.623 slat (nsec): min=5834, max=28545, avg=7824.65, stdev=3077.31 00:34:10.623 clat (usec): min=419, max=42445, avg=40703.77, stdev=3653.17 00:34:10.623 lat (usec): min=425, max=42452, avg=40711.59, stdev=3653.21 00:34:10.623 clat percentiles (usec): 00:34:10.623 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:10.623 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:10.623 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:10.623 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:10.623 | 99.99th=[42206] 00:34:10.623 bw ( KiB/s): min= 384, max= 448, per=50.10%, avg=392.00, stdev=17.60, samples=20 00:34:10.623 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:34:10.623 lat (usec) : 500=0.81% 00:34:10.623 lat (msec) : 50=99.19% 00:34:10.623 cpu : usr=96.31%, sys=3.44%, ctx=14, majf=0, minf=50 00:34:10.623 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.623 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.623 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.623 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.623 filename1: (groupid=0, jobs=1): err= 0: pid=2254109: Mon Dec 9 16:07:05 2024 00:34:10.623 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:34:10.623 slat (nsec): min=5842, max=30039, avg=7765.30, stdev=3018.73 00:34:10.623 clat (usec): min=40834, max=42085, avg=41009.20, stdev=179.63 00:34:10.623 lat (usec): min=40840, max=42099, avg=41016.97, stdev=179.94 00:34:10.623 clat percentiles (usec): 00:34:10.623 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:10.623 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:10.623 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:10.623 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:10.623 | 99.99th=[42206] 00:34:10.623 bw ( KiB/s): min= 384, max= 416, per=49.58%, avg=388.80, stdev=11.72, samples=20 00:34:10.623 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:10.623 lat (msec) : 50=100.00% 00:34:10.623 cpu : usr=97.18%, sys=2.57%, ctx=7, majf=0, minf=160 00:34:10.623 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.623 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.623 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:10.623 00:34:10.623 Run status group 0 (all jobs): 00:34:10.623 READ: bw=783KiB/s (801kB/s), 390KiB/s-393KiB/s (399kB/s-402kB/s), io=7840KiB (8028kB), run=10012-10019msec 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:10.624 16:07:05 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.624 00:34:10.624 real 0m11.499s 00:34:10.624 user 0m26.920s 00:34:10.624 sys 0m0.908s 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.624 16:07:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:10.624 ************************************ 00:34:10.624 END TEST fio_dif_1_multi_subsystems 00:34:10.624 ************************************ 00:34:10.883 16:07:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:10.883 16:07:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.883 16:07:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.883 16:07:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 ************************************ 00:34:10.883 START TEST fio_dif_rand_params 00:34:10.883 ************************************ 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 bdev_null0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 16:07:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:10.883 [2024-12-09 16:07:05.948458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.883 { 00:34:10.883 "params": { 
00:34:10.883 "name": "Nvme$subsystem", 00:34:10.883 "trtype": "$TEST_TRANSPORT", 00:34:10.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.883 "adrfam": "ipv4", 00:34:10.883 "trsvcid": "$NVMF_PORT", 00:34:10.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.883 "hdgst": ${hdgst:-false}, 00:34:10.883 "ddgst": ${ddgst:-false} 00:34:10.883 }, 00:34:10.883 "method": "bdev_nvme_attach_controller" 00:34:10.883 } 00:34:10.883 EOF 00:34:10.883 )") 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.883 16:07:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:34:10.883 16:07:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:10.883 "params": {
00:34:10.883 "name": "Nvme0",
00:34:10.883 "trtype": "tcp",
00:34:10.883 "traddr": "10.0.0.2",
00:34:10.883 "adrfam": "ipv4",
00:34:10.883 "trsvcid": "4420",
00:34:10.883 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:10.884 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:10.884 "hdgst": false,
00:34:10.884 "ddgst": false
00:34:10.884 },
00:34:10.884 "method": "bdev_nvme_attach_controller"
00:34:10.884 }'
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:34:10.884 16:07:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:34:10.884 16:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:10.884 16:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:10.884 16:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:10.884 16:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:11.142 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:34:11.142 ...
00:34:11.142 fio-3.35
00:34:11.142 Starting 3 threads
00:34:17.855 
00:34:17.855 filename0: (groupid=0, jobs=1): err= 0: pid=2256044: Mon Dec 9 16:07:12 2024
00:34:17.855 read: IOPS=327, BW=41.0MiB/s (43.0MB/s)(205MiB/5008msec)
00:34:17.855 slat (nsec): min=6151, max=32131, avg=10847.45, stdev=2138.08
00:34:17.855 clat (usec): min=3189, max=51475, avg=9133.53, stdev=4880.60
00:34:17.855 lat (usec): min=3195, max=51490, avg=9144.38, stdev=4880.71
00:34:17.855 clat percentiles (usec):
00:34:17.855 | 1.00th=[ 3851], 5.00th=[ 5997], 10.00th=[ 6783], 20.00th=[ 7701],
00:34:17.855 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979],
00:34:17.855 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814],
00:34:17.855 | 99.00th=[45876], 99.50th=[48497], 99.90th=[50594], 99.95th=[51643],
00:34:17.855 | 99.99th=[51643]
00:34:17.855 bw ( KiB/s): min=34048, max=44544, per=35.69%, avg=41984.00, stdev=3067.26, samples=10
00:34:17.855 iops : min= 266, max= 348, avg=328.00, stdev=23.96, samples=10
00:34:17.855 lat (msec) : 4=1.46%, 10=84.96%, 20=12.12%, 50=1.28%, 100=0.18%
00:34:17.855 cpu : usr=94.43%, sys=5.27%, ctx=8, majf=0, minf=9
00:34:17.855 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.855 issued rwts: total=1642,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.855 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:17.855 filename0: (groupid=0, jobs=1): err= 0: pid=2256045: Mon Dec 9 16:07:12 2024
00:34:17.855 read: IOPS=317, BW=39.7MiB/s (41.7MB/s)(199MiB/5004msec)
00:34:17.855 slat (nsec): min=6185, max=28759, avg=11217.84, stdev=2170.56
00:34:17.855 clat (usec): min=3298, max=50634, avg=9420.16, stdev=4537.18
00:34:17.855 lat (usec): min=3306, max=50647, avg=9431.38, stdev=4537.40
00:34:17.855 clat percentiles (usec):
00:34:17.855 | 1.00th=[ 3589], 5.00th=[ 5276], 10.00th=[ 6521], 20.00th=[ 7963],
00:34:17.855 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634],
00:34:17.855 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469],
00:34:17.855 | 99.00th=[45351], 99.50th=[49021], 99.90th=[50070], 99.95th=[50594],
00:34:17.855 | 99.99th=[50594]
00:34:17.855 bw ( KiB/s): min=34304, max=48896, per=34.58%, avg=40678.40, stdev=3637.34, samples=10
00:34:17.856 iops : min= 268, max= 382, avg=317.80, stdev=28.42, samples=10
00:34:17.856 lat (msec) : 4=3.65%, 10=65.81%, 20=29.42%, 50=0.88%, 100=0.25%
00:34:17.856 cpu : usr=94.66%, sys=4.94%, ctx=14, majf=0, minf=9
00:34:17.856 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.856 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.856 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:17.856 filename0: (groupid=0, jobs=1): err= 0: pid=2256046: Mon Dec 9 16:07:12 2024
00:34:17.856 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(175MiB/5044msec)
00:34:17.856 slat (nsec): min=6127, max=30344, avg=11224.16, stdev=2204.80
00:34:17.856 clat (usec): min=3311, max=87858, avg=10750.82, stdev=6123.25
00:34:17.856 lat (usec): min=3317, max=87871, avg=10762.05, stdev=6123.07
00:34:17.856 clat percentiles (usec):
00:34:17.856 | 1.00th=[ 3884], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[ 8717],
00:34:17.856 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552],
00:34:17.856 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12125], 95.00th=[12649],
00:34:17.856 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[87557],
00:34:17.856 | 99.99th=[87557]
00:34:17.856 bw ( KiB/s): min=31488, max=41984, per=30.47%, avg=35840.00, stdev=3119.05, samples=10
00:34:17.856 iops : min= 246, max= 328, avg=280.00, stdev=24.37, samples=10
00:34:17.856 lat (msec) : 4=1.93%, 10=44.72%, 20=51.14%, 50=1.85%, 100=0.36%
00:34:17.856 cpu : usr=94.82%, sys=4.88%, ctx=11, majf=0, minf=9
00:34:17.856 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:17.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:17.856 issued rwts: total=1402,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:17.856 latency : target=0, window=0, percentile=100.00%, depth=3
00:34:17.856 
00:34:17.856 Run status group 0 (all jobs):
00:34:17.856 READ: bw=115MiB/s (120MB/s), 34.7MiB/s-41.0MiB/s (36.4MB/s-43.0MB/s), io=579MiB (608MB), run=5004-5044msec
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 bdev_null0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 [2024-12-09 16:07:12.353443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 bdev_null1
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 bdev_null2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:17.856 {
00:34:17.856 "params": {
00:34:17.856 "name": "Nvme$subsystem",
00:34:17.856 "trtype": "$TEST_TRANSPORT",
00:34:17.856 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:17.856 "adrfam": "ipv4",
00:34:17.856 "trsvcid": "$NVMF_PORT",
00:34:17.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:17.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:17.856 "hdgst": ${hdgst:-false},
00:34:17.857 "ddgst": ${ddgst:-false}
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 }
00:34:17.857 EOF
00:34:17.857 )")
00:34:17.856 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:17.857 {
00:34:17.857 "params": {
00:34:17.857 "name": "Nvme$subsystem",
00:34:17.857 "trtype": "$TEST_TRANSPORT",
00:34:17.857 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:17.857 "adrfam": "ipv4",
00:34:17.857 "trsvcid": "$NVMF_PORT",
00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:17.857 "hdgst": ${hdgst:-false},
00:34:17.857 "ddgst": ${ddgst:-false}
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 }
00:34:17.857 EOF
00:34:17.857 )")
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:17.857 {
00:34:17.857 "params": {
00:34:17.857 "name": "Nvme$subsystem",
00:34:17.857 "trtype": "$TEST_TRANSPORT",
00:34:17.857 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:17.857 "adrfam": "ipv4",
00:34:17.857 "trsvcid": "$NVMF_PORT",
00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:17.857 "hdgst": ${hdgst:-false},
00:34:17.857 "ddgst": ${ddgst:-false}
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 }
00:34:17.857 EOF
00:34:17.857 )")
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:17.857 "params": {
00:34:17.857 "name": "Nvme0",
00:34:17.857 "trtype": "tcp",
00:34:17.857 "traddr": "10.0.0.2",
00:34:17.857 "adrfam": "ipv4",
00:34:17.857 "trsvcid": "4420",
00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:34:17.857 "hdgst": false,
00:34:17.857 "ddgst": false
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 },{
00:34:17.857 "params": {
00:34:17.857 "name": "Nvme1",
00:34:17.857 "trtype": "tcp",
00:34:17.857 "traddr": "10.0.0.2",
00:34:17.857 "adrfam": "ipv4",
00:34:17.857 "trsvcid": "4420",
00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:17.857 "hdgst": false,
00:34:17.857 "ddgst": false
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 },{
00:34:17.857 "params": {
00:34:17.857 "name": "Nvme2",
00:34:17.857 "trtype": "tcp",
00:34:17.857 "traddr": "10.0.0.2",
00:34:17.857 "adrfam": "ipv4",
00:34:17.857 "trsvcid": "4420",
00:34:17.857 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:34:17.857 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:34:17.857 "hdgst": false,
00:34:17.857 "ddgst": false
00:34:17.857 },
00:34:17.857 "method": "bdev_nvme_attach_controller"
00:34:17.857 }'
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:34:17.857 16:07:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:34:17.857 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:34:17.857 ...
00:34:17.857 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:34:17.857 ...
00:34:17.857 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:34:17.857 ...
00:34:17.857 fio-3.35
00:34:17.857 Starting 24 threads
00:34:30.055 
00:34:30.055 filename0: (groupid=0, jobs=1): err= 0: pid=2257307: Mon Dec 9 16:07:23 2024
00:34:30.055 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec)
00:34:30.055 slat (nsec): min=7343, max=94457, avg=28893.37, stdev=18450.62
00:34:30.055 clat (usec): min=6978, max=31801, avg=30153.91, stdev=2094.98
00:34:30.055 lat (usec): min=6988, max=31817, avg=30182.80, stdev=2095.49
00:34:30.055 clat percentiles (usec):
00:34:30.055 | 1.00th=[16909], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:34:30.055 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540],
00:34:30.055 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.055 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[31851],
00:34:30.055 | 99.99th=[31851]
00:34:30.055 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2099.20, stdev=76.58, samples=20
00:34:30.056 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20
00:34:30.056 lat (msec) : 10=0.30%, 20=0.91%, 50=98.78%
00:34:30.056 cpu : usr=98.68%, sys=0.91%, ctx=14, majf=0, minf=9
00:34:30.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.056 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.056 filename0: (groupid=0, jobs=1): err= 0: pid=2257308: Mon Dec 9 16:07:23 2024
00:34:30.056 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10002msec)
00:34:30.056 slat (usec): min=6, max=105, avg=46.89, stdev=23.01
00:34:30.056 clat (usec): min=20342, max=46540, avg=30214.86, stdev=1101.07
00:34:30.056 lat (usec): min=20361, max=46558, avg=30261.75, stdev=1101.95
00:34:30.056 clat percentiles (usec):
00:34:30.056 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:34:30.056 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
00:34:30.056 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802],
00:34:30.056 | 99.00th=[31327], 99.50th=[32113], 99.90th=[46400], 99.95th=[46400],
00:34:30.056 | 99.99th=[46400]
00:34:30.056 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.68, stdev=71.93, samples=19
00:34:30.056 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19
00:34:30.056 lat (msec) : 50=100.00%
00:34:30.056 cpu : usr=98.52%, sys=1.06%, ctx=15, majf=0, minf=9
00:34:30.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.056 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.056 filename0: (groupid=0, jobs=1): err= 0: pid=2257309: Mon Dec 9 16:07:23 2024
00:34:30.056 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec)
00:34:30.056 slat (nsec): min=7712, max=68288, avg=20579.78, stdev=7367.60
00:34:30.056 clat (usec): min=16463, max=55076, avg=30496.28, stdev=1585.98
00:34:30.056 lat (usec): min=16478, max=55094, avg=30516.86, stdev=1585.62
00:34:30.056 clat percentiles (usec):
00:34:30.056 | 1.00th=[29754], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278],
00:34:30.056 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540],
00:34:30.056 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.056 | 99.00th=[31589], 99.50th=[31589], 99.90th=[54789], 99.95th=[55313],
00:34:30.056 | 99.99th=[55313]
00:34:30.056 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.68, stdev=71.93, samples=19
00:34:30.056 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19
00:34:30.056 lat (msec) : 20=0.31%, 50=99.39%, 100=0.31%
00:34:30.056 cpu : usr=98.73%, sys=0.86%, ctx=8, majf=0, minf=9
00:34:30.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.056 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.056 filename0: (groupid=0, jobs=1): err= 0: pid=2257310: Mon Dec 9 16:07:23 2024
00:34:30.056 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10008msec)
00:34:30.056 slat (nsec): min=7631, max=71618, avg=23819.95, stdev=11147.74
00:34:30.056 clat (usec): min=6997, max=31813, avg=30219.22, stdev=2068.24
00:34:30.056 lat (usec): min=7011, max=31826, avg=30243.04, stdev=2068.59
00:34:30.056 clat percentiles (usec):
00:34:30.056 | 1.00th=[17171], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278],
00:34:30.056 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540],
00:34:30.056 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.056 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851],
00:34:30.056 | 99.99th=[31851]
00:34:30.056 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2099.20, stdev=76.58, samples=20
00:34:30.056 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20
00:34:30.056 lat (msec) : 10=0.30%, 20=0.91%, 50=98.78%
00:34:30.056 cpu : usr=98.59%, sys=1.02%, ctx=38, majf=0, minf=9
00:34:30.056 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.056 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.056 filename0: (groupid=0, jobs=1): err= 0: pid=2257311: Mon Dec 9 16:07:23 2024
00:34:30.056 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10014msec)
00:34:30.056 slat (nsec): min=7644, max=88675, avg=29707.97, stdev=18478.03
00:34:30.056 clat (usec): min=23941, max=31772, avg=30318.00, stdev=525.67
00:34:30.056 lat (usec): min=23955, max=31787, avg=30347.70, stdev=528.04
00:34:30.056 clat percentiles (usec):
00:34:30.056 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:34:30.056 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:34:30.056 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.056 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589],
00:34:30.056 | 99.99th=[31851]
00:34:30.056 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2086.40, stdev=60.18, samples=20
00:34:30.056 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20
00:34:30.056 lat (msec) : 50=100.00%
00:34:30.056 cpu : usr=98.52%, sys=1.09%, ctx=13, majf=0, minf=9
00:34:30.056 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.056 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.056 filename0: (groupid=0, jobs=1): err= 0: pid=2257312: Mon Dec 9 16:07:23 2024
00:34:30.056 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10003msec)
00:34:30.056 slat (nsec): min=7541, max=88610, avg=24446.12, stdev=14405.33
00:34:30.056 clat (usec): min=16068, max=70681, avg=30480.48, stdev=2665.93
00:34:30.056 lat (usec): min=16120, max=70719, avg=30504.93, stdev=2665.62
00:34:30.056 clat percentiles (usec):
00:34:30.056 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278],
00:34:30.056 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540],
00:34:30.056 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.056 | 99.00th=[31589], 99.50th=[43254], 99.90th=[70779], 99.95th=[70779],
00:34:30.056 | 99.99th=[70779]
00:34:30.056 bw ( KiB/s): min= 1835, max= 2176, per=4.15%, avg=2081.00, stdev=84.38, samples=19
00:34:30.056 iops : min= 458, max= 544, avg=520.21, stdev=21.22, samples=19
00:34:30.056 lat (msec) : 20=0.84%, 50=98.85%, 100=0.31%
00:34:30.056 cpu : usr=98.47%, sys=1.14%, ctx=14, majf=0, minf=9
00:34:30.056 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0%
00:34:30.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.056 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.057 filename0: (groupid=0, jobs=1): err= 0: pid=2257313: Mon Dec 9 16:07:23 2024
00:34:30.057 read: IOPS=522, BW=2091KiB/s (2142kB/s)(20.4MiB/10003msec)
00:34:30.057 slat (usec): min=4, max=109, avg=42.05, stdev=24.32
00:34:30.057 clat (usec): min=19404, max=65417, avg=30166.14, stdev=1862.78
00:34:30.057 lat (usec): min=19412, max=65430, avg=30208.19, stdev=1864.86
00:34:30.057 clat percentiles (usec):
00:34:30.057 | 1.00th=[23200], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:34:30.057 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
00:34:30.057 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802],
00:34:30.057 | 99.00th=[35914], 99.50th=[38011], 99.90th=[47449], 99.95th=[47449],
00:34:30.057 | 99.99th=[65274]
00:34:30.057 bw ( KiB/s): min= 2036, max= 2176, per=4.16%, avg=2087.79, stdev=61.63, samples=19
00:34:30.057 iops : min= 509, max= 544, avg=521.95, stdev=15.41, samples=19
00:34:30.057 lat (msec) : 20=0.57%, 50=99.39%, 100=0.04%
00:34:30.057 cpu : usr=98.52%, sys=1.08%, ctx=13, majf=0, minf=9
00:34:30.057 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0%
00:34:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 issued rwts: total=5230,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.057 filename0: (groupid=0, jobs=1): err= 0: pid=2257314: Mon Dec 9 16:07:23 2024
00:34:30.057 read: IOPS=523, BW=2095KiB/s (2146kB/s)(20.5MiB/10018msec)
00:34:30.057 slat (nsec): min=5535, max=45130, avg=19950.35, stdev=5982.41
00:34:30.057 clat (usec): min=13587, max=34176, avg=30366.89, stdev=1226.53
00:34:30.057 lat (usec): min=13602, max=34190, avg=30386.84, stdev=1227.03
00:34:30.057 clat percentiles (usec):
00:34:30.057 | 1.00th=[28181], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278],
00:34:30.057 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540],
00:34:30.057 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.057 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851],
00:34:30.057 | 99.99th=[34341]
00:34:30.057 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2093.00, stdev=62.92, samples=20
00:34:30.057 iops : min= 512, max= 545, avg=523.25, stdev=15.73, samples=20
00:34:30.057 lat (msec) : 20=0.57%, 50=99.43%
00:34:30.057 cpu : usr=98.59%, sys=1.02%, ctx=15, majf=0, minf=9
00:34:30.057 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:34:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.057 filename1: (groupid=0, jobs=1): err= 0: pid=2257315: Mon Dec 9 16:07:23 2024
00:34:30.057 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10012msec)
00:34:30.057 slat (usec): min=5, max=103, avg=47.51, stdev=22.39
00:34:30.057 clat (usec): min=11979, max=37300, avg=30183.70, stdev=915.88
00:34:30.057 lat (usec): min=11994, max=37315, avg=30231.21, stdev=917.57
00:34:30.057 clat percentiles (usec):
00:34:30.057 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016],
00:34:30.057 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278],
00:34:30.057 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802],
00:34:30.057 | 99.00th=[31589], 99.50th=[31589], 99.90th=[35390], 99.95th=[36963],
00:34:30.057 | 99.99th=[37487]
00:34:30.057 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2086.40, stdev=60.18, samples=20
00:34:30.057 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20
00:34:30.057 lat (msec) : 20=0.31%, 50=99.69%
00:34:30.057 cpu : usr=98.61%, sys=0.98%, ctx=61, majf=0, minf=9
00:34:30.057 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:34:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.057 filename1: (groupid=0, jobs=1): err= 0: pid=2257316: Mon Dec 9 16:07:23 2024
00:34:30.057 read: IOPS=522, BW=2089KiB/s (2139kB/s)(20.4MiB/10019msec)
00:34:30.057 slat (nsec): min=7745, max=88276, avg=31144.19, stdev=18538.25
00:34:30.057 clat (usec): min=23850, max=31774, avg=30312.98, stdev=521.52
00:34:30.057 lat (usec): min=23863, max=31790, avg=30344.13, stdev=523.69
00:34:30.057 clat percentiles (usec):
00:34:30.057 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016],
00:34:30.057 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278],
00:34:30.057 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.057 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31589], 99.95th=[31851],
00:34:30.057 | 99.99th=[31851]
00:34:30.057 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2086.40, stdev=60.18, samples=20
00:34:30.057 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20
00:34:30.057 lat (msec) : 50=100.00%
00:34:30.057 cpu : usr=98.51%, sys=1.09%, ctx=13, majf=0, minf=9
00:34:30.057 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:34:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:30.057 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16
00:34:30.057 filename1: (groupid=0, jobs=1): err= 0: pid=2257317: Mon Dec 9 16:07:23 2024
00:34:30.057 read: IOPS=525, BW=2102KiB/s (2153kB/s)(20.6MiB/10015msec)
00:34:30.057 slat (usec): min=7, max=103, avg=23.95, stdev=21.05
00:34:30.057 clat (usec): min=6946, max=32359, avg=30263.61, stdev=1993.91
00:34:30.057 lat (usec): min=6960, max=32373, avg=30287.56, stdev=1992.60
00:34:30.057 clat percentiles (usec):
00:34:30.057 | 1.00th=[17695], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278],
00:34:30.057 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540],
00:34:30.057 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065],
00:34:30.057 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375],
00:34:30.057 | 99.99th=[32375]
00:34:30.057 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2099.20, stdev=76.58, samples=20
00:34:30.057 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20
00:34:30.057 lat (msec) : 10=0.30%, 20=0.74%, 50=98.96%
00:34:30.057 cpu : usr=98.49%, sys=1.12%, ctx=13, majf=0, minf=9
00:34:30.057 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:34:30.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.057 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.057 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.057 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.058 filename1: (groupid=0, jobs=1): err= 0: pid=2257318: Mon Dec 9 16:07:23 2024 00:34:30.058 read: IOPS=522, BW=2088KiB/s (2139kB/s)(20.4MiB/10002msec) 00:34:30.058 slat (nsec): min=4344, max=88885, avg=29222.50, stdev=18451.85 00:34:30.058 clat (usec): min=19616, max=55593, avg=30341.63, stdev=1403.43 00:34:30.058 lat (usec): min=19624, max=55606, avg=30370.85, stdev=1403.74 00:34:30.058 clat percentiles (usec): 00:34:30.058 | 1.00th=[24249], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:30.058 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:30.058 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.058 | 99.00th=[31589], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:34:30.058 | 99.99th=[55837] 00:34:30.058 bw ( KiB/s): min= 1968, max= 2176, per=4.15%, avg=2084.21, stdev=65.07, samples=19 00:34:30.058 iops : min= 492, max= 544, avg=521.05, stdev=16.27, samples=19 00:34:30.058 lat (msec) : 20=0.11%, 50=99.85%, 100=0.04% 00:34:30.058 cpu : usr=98.45%, sys=1.16%, ctx=15, majf=0, minf=9 00:34:30.058 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.058 filename1: (groupid=0, jobs=1): err= 0: pid=2257319: Mon Dec 9 16:07:23 2024 00:34:30.058 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10016msec) 00:34:30.058 slat (usec): min=3, 
max=104, avg=37.54, stdev=25.50 00:34:30.058 clat (usec): min=13101, max=47173, avg=29638.12, stdev=3279.10 00:34:30.058 lat (usec): min=13135, max=47185, avg=29675.67, stdev=3284.22 00:34:30.058 clat percentiles (usec): 00:34:30.058 | 1.00th=[19268], 5.00th=[22152], 10.00th=[26608], 20.00th=[30016], 00:34:30.058 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:30.058 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[31065], 00:34:30.058 | 99.00th=[41157], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:34:30.058 | 99.99th=[46924] 00:34:30.058 bw ( KiB/s): min= 2048, max= 2544, per=4.25%, avg=2132.21, stdev=130.84, samples=19 00:34:30.058 iops : min= 512, max= 636, avg=533.05, stdev=32.71, samples=19 00:34:30.058 lat (msec) : 20=1.76%, 50=98.24% 00:34:30.058 cpu : usr=98.45%, sys=1.15%, ctx=13, majf=0, minf=9 00:34:30.058 IO depths : 1=0.3%, 2=5.4%, 4=21.5%, 8=60.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:34:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.058 filename1: (groupid=0, jobs=1): err= 0: pid=2257320: Mon Dec 9 16:07:23 2024 00:34:30.058 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:34:30.058 slat (usec): min=6, max=103, avg=46.82, stdev=22.68 00:34:30.058 clat (usec): min=20413, max=47339, avg=30229.34, stdev=1135.06 00:34:30.058 lat (usec): min=20435, max=47359, avg=30276.16, stdev=1135.31 00:34:30.058 clat percentiles (usec): 00:34:30.058 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:30.058 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:30.058 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:30.058 | 99.00th=[31327], 99.50th=[32113], 99.90th=[47449], 
99.95th=[47449], 00:34:30.058 | 99.99th=[47449] 00:34:30.058 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2081.84, stdev=71.56, samples=19 00:34:30.058 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:30.058 lat (msec) : 50=100.00% 00:34:30.058 cpu : usr=98.47%, sys=1.13%, ctx=14, majf=0, minf=9 00:34:30.058 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.058 filename1: (groupid=0, jobs=1): err= 0: pid=2257321: Mon Dec 9 16:07:23 2024 00:34:30.058 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:34:30.058 slat (usec): min=5, max=111, avg=45.56, stdev=23.31 00:34:30.058 clat (usec): min=20399, max=48583, avg=30226.90, stdev=1189.70 00:34:30.058 lat (usec): min=20426, max=48598, avg=30272.46, stdev=1190.49 00:34:30.058 clat percentiles (usec): 00:34:30.058 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:30.058 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:34:30.058 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:30.058 | 99.00th=[31327], 99.50th=[32113], 99.90th=[48497], 99.95th=[48497], 00:34:30.058 | 99.99th=[48497] 00:34:30.058 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.68, stdev=71.93, samples=19 00:34:30.058 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:30.058 lat (msec) : 50=100.00% 00:34:30.058 cpu : usr=98.42%, sys=1.18%, ctx=14, majf=0, minf=9 00:34:30.058 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.058 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.058 filename1: (groupid=0, jobs=1): err= 0: pid=2257322: Mon Dec 9 16:07:23 2024 00:34:30.058 read: IOPS=522, BW=2090KiB/s (2140kB/s)(20.4MiB/10012msec) 00:34:30.058 slat (usec): min=4, max=122, avg=34.68, stdev=13.35 00:34:30.058 clat (usec): min=17434, max=32337, avg=30329.58, stdev=787.27 00:34:30.058 lat (usec): min=17442, max=32351, avg=30364.27, stdev=786.95 00:34:30.058 clat percentiles (usec): 00:34:30.058 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:30.058 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:34:30.058 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:34:30.058 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32113], 99.95th=[32375], 00:34:30.058 | 99.99th=[32375] 00:34:30.058 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2086.60, stdev=60.05, samples=20 00:34:30.058 iops : min= 512, max= 544, avg=521.65, stdev=15.01, samples=20 00:34:30.058 lat (msec) : 20=0.31%, 50=99.69% 00:34:30.058 cpu : usr=98.34%, sys=1.17%, ctx=49, majf=0, minf=9 00:34:30.058 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.058 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.059 filename2: (groupid=0, jobs=1): err= 0: pid=2257323: Mon Dec 9 16:07:23 2024 00:34:30.059 read: IOPS=524, BW=2096KiB/s (2147kB/s)(20.5MiB/10014msec) 00:34:30.059 slat (usec): min=5, max=106, avg=25.51, stdev=22.99 00:34:30.059 clat (usec): min=12045, max=32226, avg=30335.70, stdev=1370.34 00:34:30.059 lat (usec): min=12053, max=32239, 
avg=30361.21, stdev=1369.02 00:34:30.059 clat percentiles (usec): 00:34:30.059 | 1.00th=[27132], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:34:30.059 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:30.059 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.059 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:34:30.059 | 99.99th=[32113] 00:34:30.059 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2093.00, stdev=62.92, samples=20 00:34:30.059 iops : min= 512, max= 545, avg=523.25, stdev=15.73, samples=20 00:34:30.059 lat (msec) : 20=0.61%, 50=99.39% 00:34:30.059 cpu : usr=98.71%, sys=0.80%, ctx=14, majf=0, minf=9 00:34:30.059 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 issued rwts: total=5248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.059 filename2: (groupid=0, jobs=1): err= 0: pid=2257324: Mon Dec 9 16:07:23 2024 00:34:30.059 read: IOPS=524, BW=2096KiB/s (2146kB/s)(20.5MiB/10007msec) 00:34:30.059 slat (nsec): min=4866, max=97606, avg=16348.73, stdev=10297.08 00:34:30.059 clat (usec): min=7076, max=48750, avg=30466.89, stdev=2098.84 00:34:30.059 lat (usec): min=7092, max=48762, avg=30483.24, stdev=2098.41 00:34:30.059 clat percentiles (usec): 00:34:30.059 | 1.00th=[19530], 5.00th=[30278], 10.00th=[30278], 20.00th=[30540], 00:34:30.059 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:30.059 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.059 | 99.00th=[34341], 99.50th=[38536], 99.90th=[48497], 99.95th=[48497], 00:34:30.059 | 99.99th=[48497] 00:34:30.059 bw ( KiB/s): min= 1968, max= 2144, per=4.17%, avg=2093.60, stdev=42.89, samples=20 
00:34:30.059 iops : min= 492, max= 536, avg=523.40, stdev=10.72, samples=20 00:34:30.059 lat (msec) : 10=0.08%, 20=1.07%, 50=98.86% 00:34:30.059 cpu : usr=98.38%, sys=1.22%, ctx=13, majf=0, minf=9 00:34:30.059 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=80.7%, 16=18.4%, 32=0.0%, >=64=0.0% 00:34:30.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 complete : 0=0.0%, 4=89.5%, 8=10.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 issued rwts: total=5244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.059 filename2: (groupid=0, jobs=1): err= 0: pid=2257325: Mon Dec 9 16:07:23 2024 00:34:30.059 read: IOPS=523, BW=2095KiB/s (2146kB/s)(20.5MiB/10019msec) 00:34:30.059 slat (nsec): min=4747, max=41255, avg=18720.15, stdev=5750.84 00:34:30.059 clat (usec): min=13638, max=31887, avg=30381.60, stdev=1223.29 00:34:30.059 lat (usec): min=13653, max=31900, avg=30400.32, stdev=1223.72 00:34:30.059 clat percentiles (usec): 00:34:30.059 | 1.00th=[28443], 5.00th=[30278], 10.00th=[30278], 20.00th=[30278], 00:34:30.059 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:34:30.059 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.059 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:34:30.059 | 99.99th=[31851] 00:34:30.059 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2092.80, stdev=62.64, samples=20 00:34:30.059 iops : min= 512, max= 544, avg=523.20, stdev=15.66, samples=20 00:34:30.059 lat (msec) : 20=0.61%, 50=99.39% 00:34:30.059 cpu : usr=98.46%, sys=1.14%, ctx=14, majf=0, minf=9 00:34:30.059 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 issued rwts: total=5248,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:30.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.059 filename2: (groupid=0, jobs=1): err= 0: pid=2257326: Mon Dec 9 16:07:23 2024 00:34:30.059 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10003msec) 00:34:30.059 slat (usec): min=6, max=103, avg=45.61, stdev=23.03 00:34:30.059 clat (usec): min=20338, max=47157, avg=30231.40, stdev=1126.65 00:34:30.059 lat (usec): min=20360, max=47174, avg=30277.02, stdev=1127.22 00:34:30.059 clat percentiles (usec): 00:34:30.059 | 1.00th=[29754], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:30.059 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:30.059 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:30.059 | 99.00th=[31327], 99.50th=[32113], 99.90th=[46924], 99.95th=[46924], 00:34:30.059 | 99.99th=[46924] 00:34:30.059 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2081.84, stdev=71.56, samples=19 00:34:30.059 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:30.059 lat (msec) : 50=100.00% 00:34:30.059 cpu : usr=98.39%, sys=1.21%, ctx=15, majf=0, minf=9 00:34:30.059 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.059 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.059 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.059 filename2: (groupid=0, jobs=1): err= 0: pid=2257327: Mon Dec 9 16:07:23 2024 00:34:30.059 read: IOPS=522, BW=2089KiB/s (2140kB/s)(20.4MiB/10016msec) 00:34:30.059 slat (usec): min=5, max=107, avg=45.73, stdev=22.36 00:34:30.059 clat (usec): min=20273, max=46504, avg=30196.99, stdev=768.20 00:34:30.059 lat (usec): min=20301, max=46521, avg=30242.73, stdev=770.26 00:34:30.059 clat percentiles (usec): 00:34:30.059 | 1.00th=[29754], 
5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:30.059 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:34:30.059 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:30.059 | 99.00th=[31327], 99.50th=[32113], 99.90th=[32637], 99.95th=[32637], 00:34:30.059 | 99.99th=[46400] 00:34:30.059 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2081.68, stdev=57.91, samples=19 00:34:30.059 iops : min= 512, max= 544, avg=520.42, stdev=14.48, samples=19 00:34:30.059 lat (msec) : 50=100.00% 00:34:30.060 cpu : usr=98.68%, sys=0.92%, ctx=14, majf=0, minf=9 00:34:30.060 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.060 filename2: (groupid=0, jobs=1): err= 0: pid=2257328: Mon Dec 9 16:07:23 2024 00:34:30.060 read: IOPS=526, BW=2108KiB/s (2158kB/s)(20.6MiB/10021msec) 00:34:30.060 slat (nsec): min=7582, max=44045, avg=13886.67, stdev=6134.70 00:34:30.060 clat (usec): min=6273, max=31901, avg=30249.89, stdev=2365.39 00:34:30.060 lat (usec): min=6286, max=31916, avg=30263.78, stdev=2365.20 00:34:30.060 clat percentiles (usec): 00:34:30.060 | 1.00th=[13698], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:34:30.060 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:34:30.060 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.060 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:34:30.060 | 99.99th=[31851] 00:34:30.060 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2105.60, stdev=97.17, samples=20 00:34:30.060 iops : min= 512, max= 608, avg=526.40, stdev=24.29, samples=20 00:34:30.060 lat (msec) : 
10=0.64%, 20=0.87%, 50=98.48% 00:34:30.060 cpu : usr=98.40%, sys=1.19%, ctx=37, majf=0, minf=9 00:34:30.060 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.060 filename2: (groupid=0, jobs=1): err= 0: pid=2257329: Mon Dec 9 16:07:23 2024 00:34:30.060 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10005msec) 00:34:30.060 slat (nsec): min=7727, max=88697, avg=31611.32, stdev=18753.90 00:34:30.060 clat (usec): min=7046, max=71419, avg=30364.18, stdev=2651.12 00:34:30.060 lat (usec): min=7054, max=71437, avg=30395.79, stdev=2650.99 00:34:30.060 clat percentiles (usec): 00:34:30.060 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30016], 20.00th=[30016], 00:34:30.060 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:34:30.060 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:34:30.060 | 99.00th=[31327], 99.50th=[31589], 99.90th=[71828], 99.95th=[71828], 00:34:30.060 | 99.99th=[71828] 00:34:30.060 bw ( KiB/s): min= 1792, max= 2176, per=4.13%, avg=2074.95, stdev=91.30, samples=19 00:34:30.060 iops : min= 448, max= 544, avg=518.74, stdev=22.83, samples=19 00:34:30.060 lat (msec) : 10=0.31%, 50=99.39%, 100=0.31% 00:34:30.060 cpu : usr=98.48%, sys=1.08%, ctx=15, majf=0, minf=9 00:34:30.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.060 
filename2: (groupid=0, jobs=1): err= 0: pid=2257330: Mon Dec 9 16:07:23 2024 00:34:30.060 read: IOPS=521, BW=2086KiB/s (2136kB/s)(20.4MiB/10004msec) 00:34:30.060 slat (usec): min=4, max=123, avg=48.48, stdev=22.51 00:34:30.060 clat (usec): min=20393, max=49501, avg=30201.93, stdev=1234.14 00:34:30.060 lat (usec): min=20409, max=49514, avg=30250.41, stdev=1235.11 00:34:30.060 clat percentiles (usec): 00:34:30.060 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 00:34:30.060 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:34:30.060 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:34:30.060 | 99.00th=[31327], 99.50th=[31589], 99.90th=[49546], 99.95th=[49546], 00:34:30.060 | 99.99th=[49546] 00:34:30.060 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2081.68, stdev=71.93, samples=19 00:34:30.060 iops : min= 480, max= 544, avg=520.42, stdev=17.98, samples=19 00:34:30.060 lat (msec) : 50=100.00% 00:34:30.060 cpu : usr=98.47%, sys=1.13%, ctx=10, majf=0, minf=9 00:34:30.060 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:30.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.060 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:30.060 00:34:30.060 Run status group 0 (all jobs): 00:34:30.060 READ: bw=49.0MiB/s (51.4MB/s), 2085KiB/s-2137KiB/s (2135kB/s-2188kB/s), io=491MiB (515MB), run=10002-10021msec 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 
00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.060 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.060 16:07:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for 
sub in "$@" 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 bdev_null0 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 [2024-12-09 16:07:23.996978] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.061 16:07:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 bdev_null1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:30.061 { 00:34:30.061 "params": { 00:34:30.061 "name": "Nvme$subsystem", 00:34:30.061 "trtype": "$TEST_TRANSPORT", 00:34:30.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.061 "adrfam": "ipv4", 00:34:30.061 "trsvcid": "$NVMF_PORT", 00:34:30.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.061 "hdgst": ${hdgst:-false}, 
00:34:30.061 "ddgst": ${ddgst:-false} 00:34:30.061 }, 00:34:30.061 "method": "bdev_nvme_attach_controller" 00:34:30.061 } 00:34:30.061 EOF 00:34:30.061 )") 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.061 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:30.062 { 00:34:30.062 "params": { 00:34:30.062 "name": "Nvme$subsystem", 00:34:30.062 "trtype": "$TEST_TRANSPORT", 00:34:30.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:30.062 "adrfam": "ipv4", 00:34:30.062 "trsvcid": "$NVMF_PORT", 00:34:30.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:30.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:30.062 "hdgst": ${hdgst:-false}, 00:34:30.062 "ddgst": ${ddgst:-false} 00:34:30.062 }, 00:34:30.062 "method": "bdev_nvme_attach_controller" 00:34:30.062 } 00:34:30.062 EOF 00:34:30.062 )") 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:30.062 "params": { 00:34:30.062 "name": "Nvme0", 00:34:30.062 "trtype": "tcp", 00:34:30.062 "traddr": "10.0.0.2", 00:34:30.062 "adrfam": "ipv4", 00:34:30.062 "trsvcid": "4420", 00:34:30.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:30.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:30.062 "hdgst": false, 00:34:30.062 "ddgst": false 00:34:30.062 }, 00:34:30.062 "method": "bdev_nvme_attach_controller" 00:34:30.062 },{ 00:34:30.062 "params": { 00:34:30.062 "name": "Nvme1", 00:34:30.062 "trtype": "tcp", 00:34:30.062 "traddr": "10.0.0.2", 00:34:30.062 "adrfam": "ipv4", 00:34:30.062 "trsvcid": "4420", 00:34:30.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:30.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:30.062 "hdgst": false, 00:34:30.062 "ddgst": false 00:34:30.062 }, 00:34:30.062 "method": "bdev_nvme_attach_controller" 00:34:30.062 }' 00:34:30.062 16:07:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:30.062 16:07:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.062 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:30.062 ... 00:34:30.062 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:30.062 ... 
00:34:30.062 fio-3.35 00:34:30.062 Starting 4 threads 00:34:35.326 00:34:35.326 filename0: (groupid=0, jobs=1): err= 0: pid=2259246: Mon Dec 9 16:07:30 2024 00:34:35.326 read: IOPS=2664, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:34:35.326 slat (nsec): min=6277, max=84158, avg=16720.85, stdev=8706.39 00:34:35.326 clat (usec): min=688, max=5351, avg=2951.45, stdev=427.20 00:34:35.326 lat (usec): min=705, max=5378, avg=2968.17, stdev=427.66 00:34:35.326 clat percentiles (usec): 00:34:35.326 | 1.00th=[ 1663], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2671], 00:34:35.326 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:34:35.326 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3326], 95.00th=[ 3523], 00:34:35.326 | 99.00th=[ 4146], 99.50th=[ 4490], 99.90th=[ 4883], 99.95th=[ 4948], 00:34:35.326 | 99.99th=[ 5342] 00:34:35.326 bw ( KiB/s): min=20928, max=22608, per=25.85%, avg=21376.56, stdev=550.90, samples=9 00:34:35.326 iops : min= 2616, max= 2826, avg=2672.00, stdev=68.87, samples=9 00:34:35.326 lat (usec) : 750=0.03%, 1000=0.11% 00:34:35.326 lat (msec) : 2=2.17%, 4=96.29%, 10=1.40% 00:34:35.326 cpu : usr=97.08%, sys=2.30%, ctx=126, majf=0, minf=9 00:34:35.326 IO depths : 1=0.5%, 2=8.9%, 4=62.2%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 issued rwts: total=13330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.326 filename0: (groupid=0, jobs=1): err= 0: pid=2259247: Mon Dec 9 16:07:30 2024 00:34:35.326 read: IOPS=2547, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5002msec) 00:34:35.326 slat (nsec): min=5976, max=74134, avg=16080.22, stdev=11633.27 00:34:35.326 clat (usec): min=782, max=5653, avg=3089.28, stdev=457.37 00:34:35.326 lat (usec): min=792, max=5676, avg=3105.36, stdev=457.52 00:34:35.326 clat percentiles 
(usec): 00:34:35.326 | 1.00th=[ 1958], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2802], 00:34:35.326 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:35.326 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3884], 00:34:35.326 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5538], 00:34:35.326 | 99.99th=[ 5669] 00:34:35.326 bw ( KiB/s): min=20144, max=20896, per=24.67%, avg=20396.44, stdev=237.83, samples=9 00:34:35.326 iops : min= 2518, max= 2612, avg=2549.56, stdev=29.73, samples=9 00:34:35.326 lat (usec) : 1000=0.02% 00:34:35.326 lat (msec) : 2=1.15%, 4=94.80%, 10=4.03% 00:34:35.326 cpu : usr=96.86%, sys=2.80%, ctx=8, majf=0, minf=9 00:34:35.326 IO depths : 1=0.3%, 2=7.0%, 4=64.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 issued rwts: total=12743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.326 filename1: (groupid=0, jobs=1): err= 0: pid=2259248: Mon Dec 9 16:07:30 2024 00:34:35.326 read: IOPS=2576, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:34:35.326 slat (nsec): min=5986, max=75132, avg=15963.56, stdev=11648.27 00:34:35.326 clat (usec): min=536, max=6020, avg=3052.27, stdev=439.95 00:34:35.326 lat (usec): min=559, max=6041, avg=3068.24, stdev=440.31 00:34:35.326 clat percentiles (usec): 00:34:35.326 | 1.00th=[ 1844], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2802], 00:34:35.326 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:34:35.326 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3752], 00:34:35.326 | 99.00th=[ 4424], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5669], 00:34:35.326 | 99.99th=[ 5932] 00:34:35.326 bw ( KiB/s): min=20104, max=20960, per=24.87%, avg=20559.11, stdev=283.53, samples=9 00:34:35.326 iops : 
min= 2513, max= 2620, avg=2569.89, stdev=35.44, samples=9 00:34:35.326 lat (usec) : 750=0.02%, 1000=0.12% 00:34:35.326 lat (msec) : 2=1.27%, 4=95.74%, 10=2.85% 00:34:35.326 cpu : usr=97.40%, sys=2.24%, ctx=8, majf=0, minf=9 00:34:35.326 IO depths : 1=0.5%, 2=8.9%, 4=62.6%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 issued rwts: total=12887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.326 filename1: (groupid=0, jobs=1): err= 0: pid=2259249: Mon Dec 9 16:07:30 2024 00:34:35.326 read: IOPS=2545, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5002msec) 00:34:35.326 slat (nsec): min=5976, max=75208, avg=16015.59, stdev=11808.59 00:34:35.326 clat (usec): min=673, max=5666, avg=3089.83, stdev=476.20 00:34:35.326 lat (usec): min=685, max=5685, avg=3105.85, stdev=476.22 00:34:35.326 clat percentiles (usec): 00:34:35.326 | 1.00th=[ 1876], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2835], 00:34:35.326 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:35.326 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3916], 00:34:35.326 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:35.326 | 99.99th=[ 5669] 00:34:35.326 bw ( KiB/s): min=19664, max=20848, per=24.65%, avg=20376.00, stdev=387.11, samples=9 00:34:35.326 iops : min= 2458, max= 2606, avg=2547.00, stdev=48.39, samples=9 00:34:35.326 lat (usec) : 750=0.01%, 1000=0.09% 00:34:35.326 lat (msec) : 2=1.25%, 4=94.26%, 10=4.40% 00:34:35.326 cpu : usr=97.20%, sys=2.46%, ctx=8, majf=0, minf=9 00:34:35.326 IO depths : 1=0.1%, 2=8.8%, 4=62.8%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.326 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:35.326 issued rwts: total=12734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.326 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.326 00:34:35.326 Run status group 0 (all jobs): 00:34:35.326 READ: bw=80.7MiB/s (84.7MB/s), 19.9MiB/s-20.8MiB/s (20.9MB/s-21.8MB/s), io=404MiB (423MB), run=5001-5002msec 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- 
# rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.326 00:34:35.326 real 0m24.446s 00:34:35.326 user 4m52.207s 00:34:35.326 sys 0m4.859s 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 ************************************ 00:34:35.326 END TEST fio_dif_rand_params 00:34:35.326 ************************************ 00:34:35.326 16:07:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:35.326 16:07:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:35.326 16:07:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.326 16:07:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.326 ************************************ 00:34:35.326 START TEST fio_dif_digest 00:34:35.326 ************************************ 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 
00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:35.326 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.327 bdev_null0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:35.327 [2024-12-09 16:07:30.474391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.327 16:07:30 
nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.327 { 00:34:35.327 "params": { 00:34:35.327 "name": "Nvme$subsystem", 00:34:35.327 "trtype": "$TEST_TRANSPORT", 00:34:35.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.327 "adrfam": "ipv4", 00:34:35.327 "trsvcid": "$NVMF_PORT", 00:34:35.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.327 "hdgst": ${hdgst:-false}, 00:34:35.327 "ddgst": ${ddgst:-false} 00:34:35.327 }, 00:34:35.327 "method": "bdev_nvme_attach_controller" 00:34:35.327 } 00:34:35.327 EOF 00:34:35.327 )") 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:35.327 "params": { 00:34:35.327 "name": "Nvme0", 00:34:35.327 "trtype": "tcp", 00:34:35.327 "traddr": "10.0.0.2", 00:34:35.327 "adrfam": "ipv4", 00:34:35.327 "trsvcid": "4420", 00:34:35.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.327 "hdgst": true, 00:34:35.327 "ddgst": true 00:34:35.327 }, 00:34:35.327 "method": "bdev_nvme_attach_controller" 00:34:35.327 }' 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:35.327 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:35.601 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:35.601 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:35.601 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.601 16:07:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.859 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:35.859 ... 00:34:35.859 fio-3.35 00:34:35.859 Starting 3 threads 00:34:48.065 00:34:48.065 filename0: (groupid=0, jobs=1): err= 0: pid=2260298: Mon Dec 9 16:07:41 2024 00:34:48.065 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(359MiB/10044msec) 00:34:48.065 slat (nsec): min=6309, max=34067, avg=11556.80, stdev=1887.68 00:34:48.065 clat (usec): min=4791, max=51005, avg=10456.66, stdev=1825.37 00:34:48.065 lat (usec): min=4801, max=51016, avg=10468.22, stdev=1825.30 00:34:48.065 clat percentiles (usec): 00:34:48.065 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:34:48.065 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:34:48.065 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:34:48.065 | 99.00th=[12387], 99.50th=[12780], 99.90th=[50070], 99.95th=[50594], 00:34:48.065 | 99.99th=[51119] 00:34:48.065 bw ( KiB/s): min=34304, max=38400, per=35.06%, avg=36761.60, stdev=903.76, samples=20 00:34:48.065 iops : min= 268, max= 300, avg=287.20, stdev= 7.06, samples=20 00:34:48.065 lat (msec) : 10=29.05%, 20=70.77%, 50=0.10%, 100=0.07% 00:34:48.065 cpu : usr=94.61%, sys=5.09%, ctx=29, majf=0, minf=63 00:34:48.065 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.065 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.065 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.065 filename0: (groupid=0, 
jobs=1): err= 0: pid=2260299: Mon Dec 9 16:07:41 2024 00:34:48.065 read: IOPS=270, BW=33.8MiB/s (35.4MB/s)(339MiB/10044msec) 00:34:48.065 slat (nsec): min=6360, max=42145, avg=11486.21, stdev=1991.95 00:34:48.065 clat (usec): min=5949, max=48930, avg=11081.35, stdev=1299.50 00:34:48.065 lat (usec): min=5961, max=48939, avg=11092.84, stdev=1299.48 00:34:48.065 clat percentiles (usec): 00:34:48.065 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:48.065 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:34:48.065 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:34:48.065 | 99.00th=[12911], 99.50th=[13435], 99.90th=[13960], 99.95th=[46400], 00:34:48.065 | 99.99th=[49021] 00:34:48.065 bw ( KiB/s): min=34048, max=36352, per=33.08%, avg=34688.00, stdev=547.80, samples=20 00:34:48.065 iops : min= 266, max= 284, avg=271.00, stdev= 4.28, samples=20 00:34:48.065 lat (msec) : 10=8.22%, 20=91.70%, 50=0.07% 00:34:48.065 cpu : usr=94.51%, sys=5.19%, ctx=20, majf=0, minf=73 00:34:48.065 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.065 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.065 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.065 filename0: (groupid=0, jobs=1): err= 0: pid=2260300: Mon Dec 9 16:07:41 2024 00:34:48.065 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10044msec) 00:34:48.065 slat (nsec): min=6375, max=72513, avg=11558.44, stdev=2079.62 00:34:48.065 clat (usec): min=7353, max=49045, avg=11376.40, stdev=1289.60 00:34:48.065 lat (usec): min=7365, max=49053, avg=11387.96, stdev=1289.61 00:34:48.065 clat percentiles (usec): 00:34:48.066 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:34:48.066 | 30.00th=[10945], 40.00th=[11076], 
50.00th=[11338], 60.00th=[11469], 00:34:48.066 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:34:48.066 | 99.00th=[13435], 99.50th=[13829], 99.90th=[15401], 99.95th=[44827], 00:34:48.066 | 99.99th=[49021] 00:34:48.066 bw ( KiB/s): min=32512, max=34560, per=32.23%, avg=33792.00, stdev=498.34, samples=20 00:34:48.066 iops : min= 254, max= 270, avg=264.00, stdev= 3.89, samples=20 00:34:48.066 lat (msec) : 10=4.13%, 20=95.80%, 50=0.08% 00:34:48.066 cpu : usr=94.90%, sys=4.80%, ctx=16, majf=0, minf=34 00:34:48.066 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.066 issued rwts: total=2642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.066 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:48.066 00:34:48.066 Run status group 0 (all jobs): 00:34:48.066 READ: bw=102MiB/s (107MB/s), 32.9MiB/s-35.8MiB/s (34.5MB/s-37.5MB/s), io=1029MiB (1078MB), run=10044-10044msec 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.066 16:07:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.066 00:34:48.066 real 0m11.341s 00:34:48.066 user 0m35.556s 00:34:48.066 sys 0m1.869s 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.066 16:07:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.066 ************************************ 00:34:48.066 END TEST fio_dif_digest 00:34:48.066 ************************************ 00:34:48.066 16:07:41 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:48.066 16:07:41 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.066 rmmod nvme_tcp 00:34:48.066 rmmod nvme_fabrics 00:34:48.066 rmmod nvme_keyring 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2251795 ']' 00:34:48.066 16:07:41 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2251795 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2251795 ']' 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@958 -- 
# kill -0 2251795 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2251795 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2251795' 00:34:48.066 killing process with pid 2251795 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2251795 00:34:48.066 16:07:41 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2251795 00:34:48.066 16:07:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:48.066 16:07:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:49.444 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:34:49.703 Waiting for block devices as requested 00:34:49.962 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:49.962 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:49.962 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.220 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.220 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.220 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.478 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:50.478 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:50.478 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:50.737 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:50.737 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:50.737 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:50.737 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:50.996 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:50.996 0000:80:04.2 (8086 2021): 
vfio-pci -> ioatdma 00:34:50.996 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:51.256 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:51.256 16:07:46 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.256 16:07:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.256 16:07:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.792 16:07:48 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.792 00:34:53.792 real 1m14.811s 00:34:53.792 user 7m11.076s 00:34:53.792 sys 0m20.670s 00:34:53.792 16:07:48 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.792 16:07:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.792 ************************************ 00:34:53.792 END TEST nvmf_dif 00:34:53.792 ************************************ 00:34:53.792 16:07:48 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:53.792 16:07:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:53.792 16:07:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.792 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:34:53.792 ************************************ 00:34:53.792 START TEST nvmf_abort_qd_sizes 
00:34:53.792 ************************************ 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:53.792 * Looking for test storage... 00:34:53.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.792 --rc genhtml_branch_coverage=1 00:34:53.792 --rc genhtml_function_coverage=1 00:34:53.792 --rc genhtml_legend=1 00:34:53.792 --rc geninfo_all_blocks=1 00:34:53.792 --rc geninfo_unexecuted_blocks=1 00:34:53.792 00:34:53.792 ' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.792 --rc genhtml_branch_coverage=1 00:34:53.792 --rc genhtml_function_coverage=1 
00:34:53.792 --rc genhtml_legend=1 00:34:53.792 --rc geninfo_all_blocks=1 00:34:53.792 --rc geninfo_unexecuted_blocks=1 00:34:53.792 00:34:53.792 ' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.792 --rc genhtml_branch_coverage=1 00:34:53.792 --rc genhtml_function_coverage=1 00:34:53.792 --rc genhtml_legend=1 00:34:53.792 --rc geninfo_all_blocks=1 00:34:53.792 --rc geninfo_unexecuted_blocks=1 00:34:53.792 00:34:53.792 ' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.792 --rc genhtml_branch_coverage=1 00:34:53.792 --rc genhtml_function_coverage=1 00:34:53.792 --rc genhtml_legend=1 00:34:53.792 --rc geninfo_all_blocks=1 00:34:53.792 --rc geninfo_unexecuted_blocks=1 00:34:53.792 00:34:53.792 ' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.792 16:07:48 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.793 16:07:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:53.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.793 16:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.066 16:07:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:59.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:59.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.066 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:59.067 Found net devices under 0000:af:00.0: cvl_0_0 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:59.067 Found net devices under 0000:af:00.1: cvl_0_1 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.067 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.325 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.325 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.325 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.325 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:34:59.326 00:34:59.326 --- 10.0.0.2 ping statistics --- 00:34:59.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.326 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:34:59.326 00:34:59.326 --- 10.0.0.1 ping statistics --- 00:34:59.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.326 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:59.326 16:07:54 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:01.859 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:02.426 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:02.426 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:03.363 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:03.363 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.363 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:03.364 16:07:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2268299 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2268299 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2268299 ']' 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.364 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.622 [2024-12-09 16:07:58.631412] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:35:03.622 [2024-12-09 16:07:58.631461] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.622 [2024-12-09 16:07:58.708281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.622 [2024-12-09 16:07:58.753491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.622 [2024-12-09 16:07:58.753525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.622 [2024-12-09 16:07:58.753532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.622 [2024-12-09 16:07:58.753538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.622 [2024-12-09 16:07:58.753544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:03.622 [2024-12-09 16:07:58.755120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.622 [2024-12-09 16:07:58.755248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.622 [2024-12-09 16:07:58.755308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.622 [2024-12-09 16:07:58.755309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 0000:5f:00.0 ]] 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@324 -- # continue 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.881 16:07:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:03.881 ************************************ 00:35:03.881 START TEST spdk_target_abort 00:35:03.881 ************************************ 00:35:03.881 16:07:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:03.881 16:07:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:03.881 16:07:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:03.881 16:07:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:03.881 16:07:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.164 spdk_targetn1 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.164 [2024-12-09 16:08:01.780526] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.164 [2024-12-09 16:08:01.821595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:07.164 16:08:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.446 Initializing NVMe Controllers 00:35:10.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:10.446 Initialization complete. Launching workers. 
00:35:10.446 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15067, failed: 0 00:35:10.446 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1355, failed to submit 13712 00:35:10.446 success 725, unsuccessful 630, failed 0 00:35:10.446 16:08:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:10.446 16:08:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:13.728 Initializing NVMe Controllers 00:35:13.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:13.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:13.728 Initialization complete. Launching workers. 00:35:13.728 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8857, failed: 0 00:35:13.728 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7605 00:35:13.728 success 321, unsuccessful 931, failed 0 00:35:13.728 16:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:13.728 16:08:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:17.010 Initializing NVMe Controllers 00:35:17.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:17.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:17.010 Initialization complete. Launching workers. 
00:35:17.010 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39043, failed: 0 00:35:17.010 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2783, failed to submit 36260 00:35:17.010 success 565, unsuccessful 2218, failed 0 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.010 16:08:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2268299 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2268299 ']' 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2268299 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268299 00:35:17.945 16:08:12 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268299' 00:35:17.945 killing process with pid 2268299 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2268299 00:35:17.945 16:08:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2268299 00:35:17.945 00:35:17.945 real 0m14.210s 00:35:17.945 user 0m54.147s 00:35:17.945 sys 0m2.673s 00:35:17.945 16:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.945 16:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:17.945 ************************************ 00:35:17.945 END TEST spdk_target_abort 00:35:17.945 ************************************ 00:35:18.204 16:08:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:18.204 16:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:18.204 16:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.204 16:08:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:18.204 ************************************ 00:35:18.204 START TEST kernel_target_abort 00:35:18.204 ************************************ 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:18.204 16:08:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:18.204 16:08:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.737 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:20.995 Waiting for block devices as requested 00:35:20.995 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:21.253 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:21.253 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:21.253 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:21.511 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:21.511 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:21.511 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:21.512 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:21.770 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:21.770 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:21.770 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:22.029 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:22.029 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:22.029 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:22.029 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:22.289 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:22.289 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:22.289 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:22.548 No valid GPT data, bailing 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:35:22.548 No valid GPT data, bailing 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n2 ]] 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n2 00:35:22.548 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ host-managed != none ]] 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # continue 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:22.549 00:35:22.549 Discovery Log Number of Records 2, Generation counter 2 00:35:22.549 =====Discovery Log Entry 0====== 00:35:22.549 trtype: tcp 00:35:22.549 adrfam: ipv4 00:35:22.549 subtype: current discovery subsystem 00:35:22.549 treq: not specified, sq flow control disable supported 00:35:22.549 portid: 1 00:35:22.549 trsvcid: 4420 
00:35:22.549 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:22.549 traddr: 10.0.0.1 00:35:22.549 eflags: none 00:35:22.549 sectype: none 00:35:22.549 =====Discovery Log Entry 1====== 00:35:22.549 trtype: tcp 00:35:22.549 adrfam: ipv4 00:35:22.549 subtype: nvme subsystem 00:35:22.549 treq: not specified, sq flow control disable supported 00:35:22.549 portid: 1 00:35:22.549 trsvcid: 4420 00:35:22.549 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:22.549 traddr: 10.0.0.1 00:35:22.549 eflags: none 00:35:22.549 sectype: none 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
adrfam traddr trsvcid subnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:22.549 16:08:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.831 Initializing NVMe Controllers 00:35:25.831 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:25.831 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:25.831 Initialization complete. Launching workers. 
00:35:25.831 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81943, failed: 0 00:35:25.831 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 81943, failed to submit 0 00:35:25.831 success 0, unsuccessful 81943, failed 0 00:35:25.831 16:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.831 16:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.121 Initializing NVMe Controllers 00:35:29.121 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.121 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.121 Initialization complete. Launching workers. 00:35:29.121 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139348, failed: 0 00:35:29.121 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32622, failed to submit 106726 00:35:29.121 success 0, unsuccessful 32622, failed 0 00:35:29.121 16:08:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:29.121 16:08:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.483 Initializing NVMe Controllers 00:35:32.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.483 Initialization complete. Launching workers. 
00:35:32.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 130812, failed: 0 00:35:32.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32698, failed to submit 98114 00:35:32.483 success 0, unsuccessful 32698, failed 0 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:32.483 16:08:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:35.017 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:35.017 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.4 (8086 2021): ioatdma -> 
vfio-pci 00:35:35.017 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.017 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.275 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:35.841 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:36.100 00:35:36.100 real 0m17.959s 00:35:36.100 user 0m8.831s 00:35:36.100 sys 0m5.444s 00:35:36.100 16:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.100 16:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:36.100 ************************************ 00:35:36.100 END TEST kernel_target_abort 00:35:36.100 ************************************ 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.100 rmmod nvme_tcp 00:35:36.100 rmmod nvme_fabrics 00:35:36.100 
rmmod nvme_keyring 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2268299 ']' 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2268299 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2268299 ']' 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2268299 00:35:36.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2268299) - No such process 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2268299 is not found' 00:35:36.100 Process with pid 2268299 is not found 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:36.100 16:08:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.634 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:35:38.892 Waiting for block devices as requested 00:35:39.151 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.151 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.410 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.410 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:39.410 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:39.410 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:39.669 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.669 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:39.669 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.927 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:39.927 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:39.927 0000:80:04.5 (8086 
2021): vfio-pci -> ioatdma 00:35:40.185 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:40.185 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:40.185 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:40.185 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.444 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:40.444 16:08:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.977 16:08:37 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.977 00:35:42.977 real 0m49.123s 00:35:42.977 user 1m7.489s 00:35:42.977 sys 0m16.996s 00:35:42.977 16:08:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.977 16:08:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.977 ************************************ 00:35:42.977 END TEST nvmf_abort_qd_sizes 00:35:42.977 ************************************ 00:35:42.977 16:08:37 -- spdk/autotest.sh@292 -- # run_test keyring_file 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.977 16:08:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.977 16:08:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.977 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:35:42.977 ************************************ 00:35:42.977 START TEST keyring_file 00:35:42.977 ************************************ 00:35:42.977 16:08:37 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.977 * Looking for test storage... 00:35:42.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:42.977 16:08:37 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:42.977 16:08:37 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:42.977 16:08:37 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:42.977 16:08:37 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@344 
-- # case "$op" in 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.977 16:08:37 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.978 16:08:37 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.978 --rc genhtml_branch_coverage=1 00:35:42.978 --rc genhtml_function_coverage=1 00:35:42.978 --rc genhtml_legend=1 00:35:42.978 --rc geninfo_all_blocks=1 00:35:42.978 --rc geninfo_unexecuted_blocks=1 00:35:42.978 00:35:42.978 ' 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:35:42.978 --rc genhtml_branch_coverage=1 00:35:42.978 --rc genhtml_function_coverage=1 00:35:42.978 --rc genhtml_legend=1 00:35:42.978 --rc geninfo_all_blocks=1 00:35:42.978 --rc geninfo_unexecuted_blocks=1 00:35:42.978 00:35:42.978 ' 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.978 --rc genhtml_branch_coverage=1 00:35:42.978 --rc genhtml_function_coverage=1 00:35:42.978 --rc genhtml_legend=1 00:35:42.978 --rc geninfo_all_blocks=1 00:35:42.978 --rc geninfo_unexecuted_blocks=1 00:35:42.978 00:35:42.978 ' 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:42.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.978 --rc genhtml_branch_coverage=1 00:35:42.978 --rc genhtml_function_coverage=1 00:35:42.978 --rc genhtml_legend=1 00:35:42.978 --rc geninfo_all_blocks=1 00:35:42.978 --rc geninfo_unexecuted_blocks=1 00:35:42.978 00:35:42.978 ' 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.978 16:08:37 
keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.978 16:08:37 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.978 16:08:37 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.978 16:08:37 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.978 16:08:37 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.978 16:08:37 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.978 16:08:37 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.978 16:08:37 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.978 16:08:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:42.978 16:08:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:42.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zGGWLgXLoG 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zGGWLgXLoG 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zGGWLgXLoG 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zGGWLgXLoG 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1MJxBGFEbU 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:42.978 16:08:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1MJxBGFEbU 00:35:42.978 16:08:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1MJxBGFEbU 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1MJxBGFEbU 
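The two prep_key calls above feed the raw hex test keys through format_interchange_psk, whose embedded `python -` step wraps the key for the temp file. A minimal stand-alone sketch of that transform, under the assumption that the output follows the NVMe/TCP TLS PSK interchange layout `NVMeTLSkey-1:<digest>:base64(key || CRC32(key)):` (digest 0 meaning no retained-key hash); the key value is the test key0 from file.sh and `python3` is assumed available:

```shell
# Hedged sketch of format_interchange_psk (nvmf/common.sh): assumes the
# NVMe/TCP PSK interchange layout NVMeTLSkey-1:<2-digit digest>:base64:,
# where the base64 payload is the configured key followed by its
# little-endian CRC32. Not SPDK's actual helper, just an illustration.
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
# append the CRC32 of the key bytes as a little-endian 32-bit trailer
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
echo "$psk"
```

The resulting string is what keyring_file_add_key later loads from /tmp/tmp.zGGWLgXLoG; digest values other than 0 would select a retained-key hash instead of the plain key.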
00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@30 -- # tgtpid=2277066 00:35:42.978 16:08:37 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2277066 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2277066 ']' 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.978 16:08:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.978 [2024-12-09 16:08:38.035362] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:35:42.978 [2024-12-09 16:08:38.035412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277066 ] 00:35:42.978 [2024-12-09 16:08:38.108101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.978 [2024-12-09 16:08:38.148365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.236 16:08:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.236 16:08:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:43.236 16:08:38 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:43.236 16:08:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.236 16:08:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.236 [2024-12-09 16:08:38.358121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.236 null0 00:35:43.236 [2024-12-09 16:08:38.390177] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:43.236 [2024-12-09 16:08:38.390452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:43.236 16:08:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.237 16:08:38 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:35:43.237 [2024-12-09 16:08:38.422263] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:35:43.237 request:
00:35:43.237 {
00:35:43.237 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:35:43.237 "secure_channel": false,
00:35:43.237 "listen_address": {
00:35:43.237 "trtype": "tcp",
00:35:43.237 "traddr": "127.0.0.1",
00:35:43.237 "trsvcid": "4420"
00:35:43.237 },
00:35:43.237 "method": "nvmf_subsystem_add_listener",
00:35:43.237 "req_id": 1
00:35:43.237 }
00:35:43.237 Got JSON-RPC error response
00:35:43.237 response:
00:35:43.237 {
00:35:43.237 "code": -32602,
00:35:43.237 "message": "Invalid parameters"
00:35:43.237 }
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:43.237 16:08:38 keyring_file -- keyring/file.sh@47 -- # bperfpid=2277096
00:35:43.237 16:08:38 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2277096 /var/tmp/bperf.sock
00:35:43.237 16:08:38 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:35:43.237 16:08:38
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2277096 ']' 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.237 16:08:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.494 [2024-12-09 16:08:38.477945] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 00:35:43.494 [2024-12-09 16:08:38.477988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277096 ] 00:35:43.494 [2024-12-09 16:08:38.551995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.494 [2024-12-09 16:08:38.594681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.494 16:08:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.494 16:08:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:43.494 16:08:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:43.494 16:08:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:43.752 16:08:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1MJxBGFEbU 00:35:43.752 16:08:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1MJxBGFEbU 00:35:44.010 16:08:39 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:44.010 16:08:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:44.010 16:08:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.010 16:08:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.010 16:08:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.267 16:08:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zGGWLgXLoG == \/\t\m\p\/\t\m\p\.\z\G\G\W\L\g\X\L\o\G ]] 00:35:44.267 16:08:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:44.267 16:08:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.267 16:08:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.1MJxBGFEbU == \/\t\m\p\/\t\m\p\.\1\M\J\x\B\G\F\E\b\U ]] 00:35:44.267 16:08:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.267 16:08:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:35:44.544 16:08:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:44.545 16:08:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:44.545 16:08:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.545 16:08:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.545 16:08:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.545 16:08:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.545 16:08:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.803 16:08:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:44.803 16:08:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:44.803 16:08:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.062 [2024-12-09 16:08:40.041516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:45.063 nvme0n1 00:35:45.063 16:08:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:45.063 16:08:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.063 16:08:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.063 16:08:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.063 16:08:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.063 16:08:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys
00:35:45.321 16:08:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:35:45.321 16:08:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:35:45.321 16:08:40 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:45.321 16:08:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:45.321 16:08:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:45.321 16:08:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:45.321 16:08:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:45.321 16:08:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:35:45.321 16:08:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:45.580 Running I/O for 1 seconds...
00:35:46.516 19404.00 IOPS, 75.80 MiB/s
00:35:46.516
00:35:46.516 Latency(us)
00:35:46.516 [2024-12-09T15:08:41.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:46.516 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:35:46.516 nvme0n1 : 1.00 19449.40 75.97 0.00 0.00 6569.54 4244.24 13481.69
00:35:46.516 [2024-12-09T15:08:41.744Z] ===================================================================================================================
00:35:46.516 [2024-12-09T15:08:41.744Z] Total : 19449.40 75.97 0.00 0.00 6569.54 4244.24 13481.69
00:35:46.516 {
00:35:46.516 "results": [
00:35:46.516 {
00:35:46.516 "job": "nvme0n1",
00:35:46.516 "core_mask": "0x2",
00:35:46.516 "workload": "randrw",
00:35:46.516 "percentage": 50,
00:35:46.516 "status": "finished",
00:35:46.516 "queue_depth": 128,
00:35:46.516 "io_size": 4096,
00:35:46.516 "runtime": 1.004247,
00:35:46.516 "iops": 19449.398404974076,
00:35:46.516 "mibps": 75.97421251942998,
00:35:46.516 "io_failed": 0,
00:35:46.516 "io_timeout": 0,
00:35:46.516 "avg_latency_us": 6569.544933149995,
00:35:46.516 "min_latency_us": 4244.23619047619,
00:35:46.516 "max_latency_us": 13481.691428571428
00:35:46.516 }
00:35:46.516 ],
00:35:46.516 "core_count": 1
00:35:46.516 }
00:35:46.516 16:08:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:46.516 16:08:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:46.776 16:08:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:35:46.776 16:08:41 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:46.776 16:08:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:46.776 16:08:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:46.776 16:08:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.776 16:08:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:47.034 16:08:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:35:47.034 16:08:42 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:47.034 16:08:42 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:35:47.034 16:08:42 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.034 16:08:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.034 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.293 [2024-12-09 16:08:42.392203] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:47.293 [2024-12-09 16:08:42.392807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1768770 (107): Transport endpoint is not connected 00:35:47.293 [2024-12-09 16:08:42.393803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1768770 (9): Bad file descriptor 00:35:47.293 [2024-12-09 16:08:42.394804] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:47.293 [2024-12-09 16:08:42.394824] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:47.293 [2024-12-09 16:08:42.394831] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:47.293 [2024-12-09 16:08:42.394840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:47.293 request: 00:35:47.293 { 00:35:47.293 "name": "nvme0", 00:35:47.293 "trtype": "tcp", 00:35:47.293 "traddr": "127.0.0.1", 00:35:47.293 "adrfam": "ipv4", 00:35:47.293 "trsvcid": "4420", 00:35:47.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:47.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:47.293 "prchk_reftag": false, 00:35:47.293 "prchk_guard": false, 00:35:47.293 "hdgst": false, 00:35:47.293 "ddgst": false, 00:35:47.293 "psk": "key1", 00:35:47.293 "allow_unrecognized_csi": false, 00:35:47.293 "method": "bdev_nvme_attach_controller", 00:35:47.293 "req_id": 1 00:35:47.293 } 00:35:47.293 Got JSON-RPC error response 00:35:47.293 response: 00:35:47.293 { 00:35:47.293 "code": -5, 00:35:47.293 "message": "Input/output error" 00:35:47.293 } 00:35:47.293 16:08:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:47.293 16:08:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:47.293 16:08:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:47.293 16:08:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:47.293 16:08:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:47.293 16:08:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.293 16:08:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.293 16:08:42 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:47.293 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.293 16:08:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.552 16:08:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:47.552 16:08:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:47.552 16:08:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.552 16:08:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.552 16:08:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.552 16:08:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.552 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.811 16:08:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:47.811 16:08:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:47.811 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:47.811 16:08:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:47.811 16:08:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:48.069 16:08:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:48.069 16:08:43 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:48.069 16:08:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.328 16:08:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:48.328 16:08:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.zGGWLgXLoG 00:35:48.328 16:08:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:48.328 16:08:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.328 16:08:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.328 [2024-12-09 16:08:43.546903] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zGGWLgXLoG': 0100660 00:35:48.328 [2024-12-09 16:08:43.546929] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:48.328 request: 00:35:48.328 { 00:35:48.328 "name": "key0", 00:35:48.328 "path": "/tmp/tmp.zGGWLgXLoG", 00:35:48.328 "method": "keyring_file_add_key", 00:35:48.328 "req_id": 1 00:35:48.328 } 00:35:48.328 Got JSON-RPC error response 00:35:48.328 response: 00:35:48.328 { 00:35:48.328 "code": -1, 00:35:48.328 "message": "Operation not permitted" 00:35:48.328 } 00:35:48.587 16:08:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:48.587 16:08:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:48.587 16:08:43 
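The `Invalid permissions for key file '/tmp/tmp.zGGWLgXLoG': 0100660` error above comes from SPDK's file keyring rejecting key files readable by group or other. A hedged stand-in for that check (`check_key_perms` is a hypothetical helper, not the real `keyring_file_check_path`, and `stat -c %a` is GNU coreutils syntax):

```shell
# Hedged sketch of the key-file permission check behind the 0660 rejection
# above: only owner-read/write (0600) or owner-read (0400) modes pass.
check_key_perms() {
  local mode
  mode=$(stat -c %a "$1")            # octal permission bits, e.g. "660"
  [ "$mode" = 600 ] || [ "$mode" = 400 ]
}

keyfile=$(mktemp)
chmod 0660 "$keyfile"
check_key_perms "$keyfile" && loose=allowed || loose=rejected
chmod 0600 "$keyfile"
check_key_perms "$keyfile" && strict=allowed || strict=rejected
rm -f "$keyfile"
```

This mirrors the test flow in the trace: `chmod 0660` makes `keyring_file_add_key` fail with `-1 Operation not permitted`, and `chmod 0600` immediately afterwards lets the same add succeed.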
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:48.587 16:08:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:48.587 16:08:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.zGGWLgXLoG 00:35:48.587 16:08:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zGGWLgXLoG 00:35:48.587 16:08:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.zGGWLgXLoG 00:35:48.587 16:08:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:48.587 16:08:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.846 16:08:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:48.846 16:08:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:48.846 16:08:43 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:48.846 16:08:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:48.846 16:08:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.105 [2024-12-09 16:08:44.156509] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zGGWLgXLoG': No such file or directory 00:35:49.105 [2024-12-09 16:08:44.156529] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:49.105 [2024-12-09 16:08:44.156544] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:49.105 [2024-12-09 16:08:44.156551] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:49.105 [2024-12-09 16:08:44.156562] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:49.105 [2024-12-09 16:08:44.156568] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:49.105 request: 00:35:49.105 { 00:35:49.105 "name": "nvme0", 00:35:49.105 "trtype": "tcp", 00:35:49.105 "traddr": "127.0.0.1", 00:35:49.105 "adrfam": "ipv4", 00:35:49.105 "trsvcid": "4420", 00:35:49.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.105 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:49.105 "prchk_reftag": false, 00:35:49.105 "prchk_guard": false, 00:35:49.105 "hdgst": false, 00:35:49.105 "ddgst": false, 00:35:49.105 "psk": "key0", 00:35:49.105 "allow_unrecognized_csi": false, 00:35:49.105 "method": "bdev_nvme_attach_controller", 00:35:49.105 "req_id": 1 00:35:49.105 } 00:35:49.105 Got JSON-RPC error response 00:35:49.105 response: 00:35:49.105 { 00:35:49.105 "code": -19, 00:35:49.105 "message": "No such device" 00:35:49.105 } 00:35:49.105 16:08:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:49.105 16:08:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:49.105 16:08:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:49.105 16:08:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:49.105 16:08:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:49.105 16:08:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:49.364 16:08:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tqTKgVmT3F 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:49.364 16:08:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:49.364 16:08:44 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:49.364 16:08:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:49.364 16:08:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:49.364 16:08:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:49.364 16:08:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tqTKgVmT3F 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tqTKgVmT3F 00:35:49.364 16:08:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.tqTKgVmT3F 00:35:49.364 16:08:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tqTKgVmT3F 00:35:49.364 16:08:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tqTKgVmT3F 00:35:49.623 16:08:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.623 16:08:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:49.882 nvme0n1 00:35:49.882 16:08:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:49.882 16:08:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.882 16:08:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.882 16:08:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.882 16:08:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.882 
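The `prep_key` / `format_interchange_psk` sequence above generates a key file in the NVMe TLS PSK interchange format via an inline `python -` snippet. A hedged reimplementation sketch, under the assumption (not confirmed by the trace) that the helper appends a little-endian CRC32 to the key bytes, base64-encodes the result, and wraps it as `NVMeTLSkey-1:<digest>:<b64>:`, treating the hex string as ASCII bytes:

```shell
# Hedged sketch of what format_interchange_psk appears to do in the trace
# above. Treating the key argument as raw ASCII bytes and the exact CRC
# placement are assumptions about the real nvmf/common.sh helper.
format_interchange_psk() {
  local key=$1 digest=$2
  python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
' "$key" "$digest"
}

psk=$(format_interchange_psk 00112233445566778899aabbccddeeff 0)
```

The generated string is then written to a `mktemp` path, `chmod 0600`ed, and registered with `keyring_file_add_key`, exactly as the trace does with `/tmp/tmp.tqTKgVmT3F`.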
16:08:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.140 16:08:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:50.140 16:08:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.140 16:08:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:50.140 16:08:45 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:50.140 16:08:45 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:50.140 16:08:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.140 16:08:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.140 16:08:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.398 16:08:45 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:50.398 16:08:45 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:50.398 16:08:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.398 16:08:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.398 16:08:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.398 16:08:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.398 16:08:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.656 16:08:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:50.656 16:08:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:50.656 16:08:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
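The repeated `get_refcnt` / `get_key` checks above all follow one pattern: dump `keyring_get_keys` over the bperf socket, filter for one key with `jq '.[] | select(.name == "keyN")'`, and read a field such as `.refcnt` or `.removed`. A self-contained sketch of that pattern, with an inline JSON sample standing in for live RPC output and `python3` substituting for the `jq` filter:

```shell
# Hedged sketch of the get_refcnt helper pattern from keyring/common.sh:
# select one key from the keyring_get_keys array and print its refcnt.
# The sample JSON below is illustrative, not captured from the run.
keys_json='[{"name":"key0","refcnt":2,"removed":false},{"name":"key1","refcnt":1,"removed":false}]'

get_refcnt() {
  echo "$keys_json" | python3 -c '
import json, sys
name = sys.argv[1]
for key in json.load(sys.stdin):
    if key["name"] == name:
        print(key["refcnt"])
' "$1"
}

refcnt=$(get_refcnt key0)
```

The trace's `(( 2 == 2 ))` and `(( 1 == 1 ))` assertions compare the value extracted this way against the expected reference count after each attach, detach, or key removal.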
nvme0 00:35:50.915 16:08:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:50.915 16:08:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:50.915 16:08:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.915 16:08:46 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:50.915 16:08:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tqTKgVmT3F 00:35:50.915 16:08:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tqTKgVmT3F 00:35:51.173 16:08:46 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1MJxBGFEbU 00:35:51.173 16:08:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1MJxBGFEbU 00:35:51.432 16:08:46 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.432 16:08:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.690 nvme0n1 00:35:51.690 16:08:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:51.690 16:08:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:51.950 16:08:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:51.950 "subsystems": [ 00:35:51.950 { 00:35:51.950 "subsystem": "keyring", 00:35:51.950 
"config": [ 00:35:51.950 { 00:35:51.950 "method": "keyring_file_add_key", 00:35:51.950 "params": { 00:35:51.950 "name": "key0", 00:35:51.950 "path": "/tmp/tmp.tqTKgVmT3F" 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "keyring_file_add_key", 00:35:51.950 "params": { 00:35:51.950 "name": "key1", 00:35:51.950 "path": "/tmp/tmp.1MJxBGFEbU" 00:35:51.950 } 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "subsystem": "iobuf", 00:35:51.950 "config": [ 00:35:51.950 { 00:35:51.950 "method": "iobuf_set_options", 00:35:51.950 "params": { 00:35:51.950 "small_pool_count": 8192, 00:35:51.950 "large_pool_count": 1024, 00:35:51.950 "small_bufsize": 8192, 00:35:51.950 "large_bufsize": 135168, 00:35:51.950 "enable_numa": false 00:35:51.950 } 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "subsystem": "sock", 00:35:51.950 "config": [ 00:35:51.950 { 00:35:51.950 "method": "sock_set_default_impl", 00:35:51.950 "params": { 00:35:51.950 "impl_name": "posix" 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "sock_impl_set_options", 00:35:51.950 "params": { 00:35:51.950 "impl_name": "ssl", 00:35:51.950 "recv_buf_size": 4096, 00:35:51.950 "send_buf_size": 4096, 00:35:51.950 "enable_recv_pipe": true, 00:35:51.950 "enable_quickack": false, 00:35:51.950 "enable_placement_id": 0, 00:35:51.950 "enable_zerocopy_send_server": true, 00:35:51.950 "enable_zerocopy_send_client": false, 00:35:51.950 "zerocopy_threshold": 0, 00:35:51.950 "tls_version": 0, 00:35:51.950 "enable_ktls": false 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "sock_impl_set_options", 00:35:51.950 "params": { 00:35:51.950 "impl_name": "posix", 00:35:51.950 "recv_buf_size": 2097152, 00:35:51.950 "send_buf_size": 2097152, 00:35:51.950 "enable_recv_pipe": true, 00:35:51.950 "enable_quickack": false, 00:35:51.950 "enable_placement_id": 0, 00:35:51.950 "enable_zerocopy_send_server": true, 00:35:51.950 
"enable_zerocopy_send_client": false, 00:35:51.950 "zerocopy_threshold": 0, 00:35:51.950 "tls_version": 0, 00:35:51.950 "enable_ktls": false 00:35:51.950 } 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "subsystem": "vmd", 00:35:51.950 "config": [] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "subsystem": "accel", 00:35:51.950 "config": [ 00:35:51.950 { 00:35:51.950 "method": "accel_set_options", 00:35:51.950 "params": { 00:35:51.950 "small_cache_size": 128, 00:35:51.950 "large_cache_size": 16, 00:35:51.950 "task_count": 2048, 00:35:51.950 "sequence_count": 2048, 00:35:51.950 "buf_count": 2048 00:35:51.950 } 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "subsystem": "bdev", 00:35:51.950 "config": [ 00:35:51.950 { 00:35:51.950 "method": "bdev_set_options", 00:35:51.950 "params": { 00:35:51.950 "bdev_io_pool_size": 65535, 00:35:51.950 "bdev_io_cache_size": 256, 00:35:51.950 "bdev_auto_examine": true, 00:35:51.950 "iobuf_small_cache_size": 128, 00:35:51.950 "iobuf_large_cache_size": 16 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_raid_set_options", 00:35:51.950 "params": { 00:35:51.950 "process_window_size_kb": 1024, 00:35:51.950 "process_max_bandwidth_mb_sec": 0 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_iscsi_set_options", 00:35:51.950 "params": { 00:35:51.950 "timeout_sec": 30 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_nvme_set_options", 00:35:51.950 "params": { 00:35:51.950 "action_on_timeout": "none", 00:35:51.950 "timeout_us": 0, 00:35:51.950 "timeout_admin_us": 0, 00:35:51.950 "keep_alive_timeout_ms": 10000, 00:35:51.950 "arbitration_burst": 0, 00:35:51.950 "low_priority_weight": 0, 00:35:51.950 "medium_priority_weight": 0, 00:35:51.950 "high_priority_weight": 0, 00:35:51.950 "nvme_adminq_poll_period_us": 10000, 00:35:51.950 "nvme_ioq_poll_period_us": 0, 00:35:51.950 "io_queue_requests": 512, 00:35:51.950 
"delay_cmd_submit": true, 00:35:51.950 "transport_retry_count": 4, 00:35:51.950 "bdev_retry_count": 3, 00:35:51.950 "transport_ack_timeout": 0, 00:35:51.950 "ctrlr_loss_timeout_sec": 0, 00:35:51.950 "reconnect_delay_sec": 0, 00:35:51.950 "fast_io_fail_timeout_sec": 0, 00:35:51.950 "disable_auto_failback": false, 00:35:51.950 "generate_uuids": false, 00:35:51.950 "transport_tos": 0, 00:35:51.950 "nvme_error_stat": false, 00:35:51.950 "rdma_srq_size": 0, 00:35:51.950 "io_path_stat": false, 00:35:51.950 "allow_accel_sequence": false, 00:35:51.950 "rdma_max_cq_size": 0, 00:35:51.950 "rdma_cm_event_timeout_ms": 0, 00:35:51.950 "dhchap_digests": [ 00:35:51.950 "sha256", 00:35:51.950 "sha384", 00:35:51.950 "sha512" 00:35:51.950 ], 00:35:51.950 "dhchap_dhgroups": [ 00:35:51.950 "null", 00:35:51.950 "ffdhe2048", 00:35:51.950 "ffdhe3072", 00:35:51.950 "ffdhe4096", 00:35:51.950 "ffdhe6144", 00:35:51.950 "ffdhe8192" 00:35:51.950 ] 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_nvme_attach_controller", 00:35:51.950 "params": { 00:35:51.950 "name": "nvme0", 00:35:51.950 "trtype": "TCP", 00:35:51.950 "adrfam": "IPv4", 00:35:51.950 "traddr": "127.0.0.1", 00:35:51.950 "trsvcid": "4420", 00:35:51.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.950 "prchk_reftag": false, 00:35:51.950 "prchk_guard": false, 00:35:51.950 "ctrlr_loss_timeout_sec": 0, 00:35:51.950 "reconnect_delay_sec": 0, 00:35:51.950 "fast_io_fail_timeout_sec": 0, 00:35:51.950 "psk": "key0", 00:35:51.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.950 "hdgst": false, 00:35:51.950 "ddgst": false, 00:35:51.950 "multipath": "multipath" 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_nvme_set_hotplug", 00:35:51.950 "params": { 00:35:51.950 "period_us": 100000, 00:35:51.950 "enable": false 00:35:51.950 } 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 "method": "bdev_wait_for_examine" 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }, 00:35:51.950 { 00:35:51.950 
"subsystem": "nbd", 00:35:51.950 "config": [] 00:35:51.950 } 00:35:51.950 ] 00:35:51.950 }' 00:35:51.950 16:08:46 keyring_file -- keyring/file.sh@115 -- # killprocess 2277096 00:35:51.950 16:08:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2277096 ']' 00:35:51.950 16:08:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2277096 00:35:51.950 16:08:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:51.950 16:08:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.950 16:08:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277096 00:35:51.950 16:08:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:51.950 16:08:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:51.950 16:08:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277096' 00:35:51.950 killing process with pid 2277096 00:35:51.950 16:08:47 keyring_file -- common/autotest_common.sh@973 -- # kill 2277096 00:35:51.950 Received shutdown signal, test time was about 1.000000 seconds 00:35:51.950 00:35:51.950 Latency(us) 00:35:51.950 [2024-12-09T15:08:47.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.950 [2024-12-09T15:08:47.178Z] =================================================================================================================== 00:35:51.950 [2024-12-09T15:08:47.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.950 16:08:47 keyring_file -- common/autotest_common.sh@978 -- # wait 2277096 00:35:52.209 16:08:47 keyring_file -- keyring/file.sh@118 -- # bperfpid=2278632 00:35:52.209 16:08:47 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2278632 /var/tmp/bperf.sock 00:35:52.209 16:08:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2278632 ']' 00:35:52.209 16:08:47 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:52.209 16:08:47 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:52.209 16:08:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.209 16:08:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:52.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:52.209 16:08:47 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:52.209 "subsystems": [ 00:35:52.209 { 00:35:52.209 "subsystem": "keyring", 00:35:52.209 "config": [ 00:35:52.209 { 00:35:52.209 "method": "keyring_file_add_key", 00:35:52.209 "params": { 00:35:52.209 "name": "key0", 00:35:52.209 "path": "/tmp/tmp.tqTKgVmT3F" 00:35:52.209 } 00:35:52.209 }, 00:35:52.209 { 00:35:52.209 "method": "keyring_file_add_key", 00:35:52.209 "params": { 00:35:52.209 "name": "key1", 00:35:52.209 "path": "/tmp/tmp.1MJxBGFEbU" 00:35:52.209 } 00:35:52.209 } 00:35:52.209 ] 00:35:52.209 }, 00:35:52.209 { 00:35:52.209 "subsystem": "iobuf", 00:35:52.209 "config": [ 00:35:52.209 { 00:35:52.209 "method": "iobuf_set_options", 00:35:52.209 "params": { 00:35:52.209 "small_pool_count": 8192, 00:35:52.209 "large_pool_count": 1024, 00:35:52.209 "small_bufsize": 8192, 00:35:52.209 "large_bufsize": 135168, 00:35:52.209 "enable_numa": false 00:35:52.209 } 00:35:52.209 } 00:35:52.209 ] 00:35:52.209 }, 00:35:52.209 { 00:35:52.209 "subsystem": "sock", 00:35:52.209 "config": [ 00:35:52.209 { 00:35:52.209 "method": "sock_set_default_impl", 00:35:52.209 "params": { 00:35:52.209 "impl_name": "posix" 00:35:52.209 } 00:35:52.209 }, 00:35:52.209 { 00:35:52.209 "method": "sock_impl_set_options", 00:35:52.209 "params": { 00:35:52.209 "impl_name": "ssl", 00:35:52.209 "recv_buf_size": 4096, 00:35:52.209 
"send_buf_size": 4096, 00:35:52.209 "enable_recv_pipe": true, 00:35:52.209 "enable_quickack": false, 00:35:52.209 "enable_placement_id": 0, 00:35:52.209 "enable_zerocopy_send_server": true, 00:35:52.209 "enable_zerocopy_send_client": false, 00:35:52.209 "zerocopy_threshold": 0, 00:35:52.209 "tls_version": 0, 00:35:52.209 "enable_ktls": false 00:35:52.209 } 00:35:52.209 }, 00:35:52.209 { 00:35:52.210 "method": "sock_impl_set_options", 00:35:52.210 "params": { 00:35:52.210 "impl_name": "posix", 00:35:52.210 "recv_buf_size": 2097152, 00:35:52.210 "send_buf_size": 2097152, 00:35:52.210 "enable_recv_pipe": true, 00:35:52.210 "enable_quickack": false, 00:35:52.210 "enable_placement_id": 0, 00:35:52.210 "enable_zerocopy_send_server": true, 00:35:52.210 "enable_zerocopy_send_client": false, 00:35:52.210 "zerocopy_threshold": 0, 00:35:52.210 "tls_version": 0, 00:35:52.210 "enable_ktls": false 00:35:52.210 } 00:35:52.210 } 00:35:52.210 ] 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "subsystem": "vmd", 00:35:52.210 "config": [] 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "subsystem": "accel", 00:35:52.210 "config": [ 00:35:52.210 { 00:35:52.210 "method": "accel_set_options", 00:35:52.210 "params": { 00:35:52.210 "small_cache_size": 128, 00:35:52.210 "large_cache_size": 16, 00:35:52.210 "task_count": 2048, 00:35:52.210 "sequence_count": 2048, 00:35:52.210 "buf_count": 2048 00:35:52.210 } 00:35:52.210 } 00:35:52.210 ] 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "subsystem": "bdev", 00:35:52.210 "config": [ 00:35:52.210 { 00:35:52.210 "method": "bdev_set_options", 00:35:52.210 "params": { 00:35:52.210 "bdev_io_pool_size": 65535, 00:35:52.210 "bdev_io_cache_size": 256, 00:35:52.210 "bdev_auto_examine": true, 00:35:52.210 "iobuf_small_cache_size": 128, 00:35:52.210 "iobuf_large_cache_size": 16 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_raid_set_options", 00:35:52.210 "params": { 00:35:52.210 "process_window_size_kb": 1024, 00:35:52.210 
"process_max_bandwidth_mb_sec": 0 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_iscsi_set_options", 00:35:52.210 "params": { 00:35:52.210 "timeout_sec": 30 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_nvme_set_options", 00:35:52.210 "params": { 00:35:52.210 "action_on_timeout": "none", 00:35:52.210 "timeout_us": 0, 00:35:52.210 "timeout_admin_us": 0, 00:35:52.210 "keep_alive_timeout_ms": 10000, 00:35:52.210 "arbitration_burst": 0, 00:35:52.210 "low_priority_weight": 0, 00:35:52.210 "medium_priority_weight": 0, 00:35:52.210 "high_priority_weight": 0, 00:35:52.210 "nvme_adminq_poll_period_us": 10000, 00:35:52.210 "nvme_ioq_poll_period_us": 0, 00:35:52.210 "io_queue_requests": 512, 00:35:52.210 "delay_cmd_submit": true, 00:35:52.210 "transport_retry_count": 4, 00:35:52.210 "bdev_retry_count": 3, 00:35:52.210 "transport_ack_timeout": 0, 00:35:52.210 "ctrlr_loss_timeout_sec": 0, 00:35:52.210 "reconnect_delay_sec": 0, 00:35:52.210 "fast_io_fail_timeout_sec": 0, 00:35:52.210 "disable_auto_failback": false, 00:35:52.210 "generate_uuids": false, 00:35:52.210 "transport_tos": 0, 00:35:52.210 "nvme_error_stat": false, 00:35:52.210 "rdma_srq_size": 0, 00:35:52.210 "io_path_stat": false, 00:35:52.210 "allow_accel_sequence": false, 00:35:52.210 "rdma_max_cq_size": 0, 00:35:52.210 "rdma_cm_event_timeout_ms": 0, 00:35:52.210 "dhchap_digests": [ 00:35:52.210 "sha256", 00:35:52.210 "sha384", 00:35:52.210 "sha512" 00:35:52.210 ], 00:35:52.210 "dhchap_dhgroups": [ 00:35:52.210 "null", 00:35:52.210 "ffdhe2048", 00:35:52.210 "ffdhe3072", 00:35:52.210 "ffdhe4096", 00:35:52.210 "ffdhe6144", 00:35:52.210 "ffdhe8192" 00:35:52.210 ] 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_nvme_attach_controller", 00:35:52.210 "params": { 00:35:52.210 "name": "nvme0", 00:35:52.210 "trtype": "TCP", 00:35:52.210 "adrfam": "IPv4", 00:35:52.210 "traddr": "127.0.0.1", 00:35:52.210 "trsvcid": "4420", 00:35:52.210 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:52.210 "prchk_reftag": false, 00:35:52.210 "prchk_guard": false, 00:35:52.210 "ctrlr_loss_timeout_sec": 0, 00:35:52.210 "reconnect_delay_sec": 0, 00:35:52.210 "fast_io_fail_timeout_sec": 0, 00:35:52.210 "psk": "key0", 00:35:52.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.210 "hdgst": false, 00:35:52.210 "ddgst": false, 00:35:52.210 "multipath": "multipath" 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_nvme_set_hotplug", 00:35:52.210 "params": { 00:35:52.210 "period_us": 100000, 00:35:52.210 "enable": false 00:35:52.210 } 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "method": "bdev_wait_for_examine" 00:35:52.210 } 00:35:52.210 ] 00:35:52.210 }, 00:35:52.210 { 00:35:52.210 "subsystem": "nbd", 00:35:52.210 "config": [] 00:35:52.210 } 00:35:52.210 ] 00:35:52.210 }' 00:35:52.210 16:08:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.210 16:08:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:52.210 [2024-12-09 16:08:47.238249] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:35:52.210 [2024-12-09 16:08:47.238296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278632 ] 00:35:52.210 [2024-12-09 16:08:47.313057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.210 [2024-12-09 16:08:47.353658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.469 [2024-12-09 16:08:47.514229] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:53.037 16:08:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.037 16:08:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:53.037 16:08:48 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:53.037 16:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.037 16:08:48 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:53.296 16:08:48 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:53.296 16:08:48 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.296 16:08:48 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:53.296 16:08:48 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:53.296 16:08:48 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.296 16:08:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:53.555 16:08:48 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:53.555 16:08:48 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:53.555 16:08:48 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:53.555 16:08:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:53.814 16:08:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:53.814 16:08:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:53.814 16:08:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.tqTKgVmT3F /tmp/tmp.1MJxBGFEbU 00:35:53.814 16:08:48 keyring_file -- keyring/file.sh@20 -- # killprocess 2278632 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2278632 ']' 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2278632 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278632 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@972 -- # echo 
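The get_refcnt checks traced above fetch `keyring_get_keys` over the bperf socket and filter the JSON with jq (`.[] | select(.name == "key0")` piped into `jq -r .refcnt`). As an illustration only, the same selection logic in Python, against a hypothetical payload shaped like that RPC response (field names taken from the jq filters in this trace, values invented):

```python
import json

# Hypothetical payload shaped like SPDK's keyring_get_keys response;
# the "name"/"refcnt" field names come from the jq filters in this log,
# the values are made up for illustration.
keys_json = '''[
  {"name": "key0", "refcnt": 2},
  {"name": "key1", "refcnt": 1}
]'''

def get_refcnt(payload: str, name: str) -> int:
    """Equivalent of: jq '.[] | select(.name == NAME)' | jq -r .refcnt"""
    key = next(k for k in json.loads(payload) if k["name"] == name)
    return key["refcnt"]
```

The trace then asserts on the extracted counts, e.g. `(( 2 == 2 ))` for key0 and `(( 1 == 1 ))` for key1.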
'killing process with pid 2278632' 00:35:53.814 killing process with pid 2278632 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@973 -- # kill 2278632 00:35:53.814 Received shutdown signal, test time was about 1.000000 seconds 00:35:53.814 00:35:53.814 Latency(us) 00:35:53.814 [2024-12-09T15:08:49.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.814 [2024-12-09T15:08:49.042Z] =================================================================================================================== 00:35:53.814 [2024-12-09T15:08:49.042Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:53.814 16:08:48 keyring_file -- common/autotest_common.sh@978 -- # wait 2278632 00:35:54.073 16:08:49 keyring_file -- keyring/file.sh@21 -- # killprocess 2277066 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2277066 ']' 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2277066 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277066 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277066' 00:35:54.073 killing process with pid 2277066 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@973 -- # kill 2277066 00:35:54.073 16:08:49 keyring_file -- common/autotest_common.sh@978 -- # wait 2277066 00:35:54.332 00:35:54.332 real 0m11.779s 00:35:54.332 user 0m29.217s 00:35:54.332 sys 0m2.784s 00:35:54.332 16:08:49 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:54.332 16:08:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.332 ************************************ 00:35:54.332 END TEST keyring_file 00:35:54.332 ************************************ 00:35:54.332 16:08:49 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:54.332 16:08:49 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:54.332 16:08:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:54.332 16:08:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:54.332 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:35:54.332 ************************************ 00:35:54.332 START TEST keyring_linux 00:35:54.332 ************************************ 00:35:54.332 16:08:49 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:54.332 Joined session keyring: 1037129508 00:35:54.591 * Looking for test storage... 
00:35:54.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:54.591 16:08:49 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:54.591 16:08:49 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:54.591 16:08:49 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:54.591 16:08:49 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.591 16:08:49 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:54.591 16:08:49 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.592 16:08:49 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:54.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.592 --rc genhtml_branch_coverage=1 00:35:54.592 --rc genhtml_function_coverage=1 00:35:54.592 --rc genhtml_legend=1 00:35:54.592 --rc geninfo_all_blocks=1 00:35:54.592 --rc geninfo_unexecuted_blocks=1 00:35:54.592 00:35:54.592 ' 00:35:54.592 16:08:49 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:54.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.592 --rc genhtml_branch_coverage=1 00:35:54.592 --rc genhtml_function_coverage=1 00:35:54.592 --rc genhtml_legend=1 00:35:54.592 --rc geninfo_all_blocks=1 00:35:54.592 --rc geninfo_unexecuted_blocks=1 00:35:54.592 00:35:54.592 ' 
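The scripts/common.sh trace above implements `lt 1.15 2` by splitting both version strings into fields and comparing them numerically, field by field, treating missing fields as zero. A minimal Python sketch of that comparison (simplified: it splits on `.` only, whereas the shell helper's IFS also handles `-` and `:` separators):

```python
def version_lt(v1: str, v2: str) -> bool:
    """Field-wise numeric version comparison, in the spirit of the
    cmp_versions helper traced above (e.g. 1.15 < 2 is true).
    Shorter versions are padded with zeros, as the shell loop's
    bound (the longer of ver1_l/ver2_l) implies."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b
```

With this, `version_lt("1.15", "2")` is true, matching the `lt 1.15 2` call in the trace.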
00:35:54.592 16:08:49 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:54.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.592 --rc genhtml_branch_coverage=1 00:35:54.592 --rc genhtml_function_coverage=1 00:35:54.592 --rc genhtml_legend=1 00:35:54.592 --rc geninfo_all_blocks=1 00:35:54.592 --rc geninfo_unexecuted_blocks=1 00:35:54.592 00:35:54.592 ' 00:35:54.592 16:08:49 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:54.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.592 --rc genhtml_branch_coverage=1 00:35:54.592 --rc genhtml_function_coverage=1 00:35:54.592 --rc genhtml_legend=1 00:35:54.592 --rc geninfo_all_blocks=1 00:35:54.592 --rc geninfo_unexecuted_blocks=1 00:35:54.592 00:35:54.592 ' 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.592 16:08:49 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.592 16:08:49 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.592 16:08:49 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.592 16:08:49 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.592 16:08:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.592 16:08:49 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.592 16:08:49 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.592 16:08:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:54.592 16:08:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:54.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:54.592 16:08:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:54.592 16:08:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:54.592 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:54.593 /tmp/:spdk-test:key0 00:35:54.593 16:08:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:54.593 16:08:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:54.593 16:08:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:54.851 16:08:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:54.851 16:08:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:54.851 /tmp/:spdk-test:key1 00:35:54.851 16:08:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2279123 00:35:54.851 16:08:49 keyring_linux -- keyring/linux.sh@50 -- # 
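The prep_key/format_interchange_psk calls traced here wrap each configured hex string into the NVMe TLS PSK interchange format visible later in the log: the `NVMeTLSkey-1` prefix, a two-digit hash indicator (`00` here), and base64 of the key bytes with a CRC32 appended, terminated by `:`. A Python sketch of that framing, under the assumption that the CRC32 is appended little-endian (the exact byte order used by the in-tree helper is not visible in this log, so the test below only round-trips the sketch against itself):

```python
import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int = 0) -> str:
    """Build a PSK interchange string: NVMeTLSkey-1:<hh>:<b64>:
    where <b64> is base64(key bytes + CRC32).  Little-endian CRC
    byte order is an assumption for this sketch."""
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

def check_interchange_psk(psk: str) -> bool:
    """Round-trip check: decode the base64 body and verify its CRC32."""
    body = base64.b64decode(psk.split(":")[2])
    return zlib.crc32(body[:-4]).to_bytes(4, "little") == body[-4:]
```

Note that, as in the trace, the "key" fed in is the ASCII hex string itself (`00112233...eeff`), not its decoded bytes, and the resulting PSK is what later gets loaded into the session keyring with `keyctl add user :spdk-test:key0 ... @s`.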
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:54.851 16:08:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2279123 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2279123 ']' 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.851 16:08:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:54.851 [2024-12-09 16:08:49.882727] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:35:54.851 [2024-12-09 16:08:49.882774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279123 ] 00:35:54.851 [2024-12-09 16:08:49.938106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.851 [2024-12-09 16:08:49.978823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:55.110 16:08:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:55.110 [2024-12-09 16:08:50.201353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.110 null0 00:35:55.110 [2024-12-09 16:08:50.233404] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:55.110 [2024-12-09 16:08:50.233693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.110 16:08:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:55.110 271393557 00:35:55.110 16:08:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:55.110 326758447 00:35:55.110 16:08:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2279270 00:35:55.110 16:08:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2279270 /var/tmp/bperf.sock 00:35:55.110 16:08:50 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2279270 ']' 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:55.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.110 16:08:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:55.110 [2024-12-09 16:08:50.306099] Starting SPDK v25.01-pre git sha1 b8248e28c / DPDK 24.03.0 initialization... 
00:35:55.110 [2024-12-09 16:08:50.306143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279270 ] 00:35:55.369 [2024-12-09 16:08:50.380182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.369 [2024-12-09 16:08:50.420905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.369 16:08:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.369 16:08:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:55.369 16:08:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:55.369 16:08:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:55.628 16:08:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:55.628 16:08:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:55.887 16:08:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:55.887 16:08:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:55.887 [2024-12-09 16:08:51.090403] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:56.145 nvme0n1 00:35:56.145 16:08:51 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:56.145 16:08:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:56.145 16:08:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:56.145 16:08:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:56.145 16:08:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.145 16:08:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:56.403 16:08:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:56.403 16:08:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:56.403 16:08:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@25 -- # sn=271393557 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 271393557 == \2\7\1\3\9\3\5\5\7 ]] 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 271393557 00:35:56.403 16:08:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:56.403 16:08:51 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.662 Running I/O for 1 seconds... 00:35:57.597 21966.00 IOPS, 85.80 MiB/s 00:35:57.597 Latency(us) 00:35:57.597 [2024-12-09T15:08:52.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.597 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:57.597 nvme0n1 : 1.01 21961.38 85.79 0.00 0.00 5808.53 1934.87 7084.13 00:35:57.597 [2024-12-09T15:08:52.825Z] =================================================================================================================== 00:35:57.597 [2024-12-09T15:08:52.825Z] Total : 21961.38 85.79 0.00 0.00 5808.53 1934.87 7084.13 00:35:57.597 { 00:35:57.597 "results": [ 00:35:57.597 { 00:35:57.597 "job": "nvme0n1", 00:35:57.597 "core_mask": "0x2", 00:35:57.597 "workload": "randread", 00:35:57.597 "status": "finished", 00:35:57.597 "queue_depth": 128, 00:35:57.597 "io_size": 4096, 00:35:57.597 "runtime": 1.006039, 00:35:57.597 "iops": 21961.375254836046, 00:35:57.597 "mibps": 85.7866220892033, 00:35:57.597 "io_failed": 0, 00:35:57.597 "io_timeout": 0, 00:35:57.597 "avg_latency_us": 5808.525510826038, 00:35:57.597 "min_latency_us": 1934.872380952381, 00:35:57.597 "max_latency_us": 7084.129523809524 00:35:57.597 } 00:35:57.597 ], 00:35:57.597 "core_count": 1 00:35:57.597 } 00:35:57.597 16:08:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:57.597 16:08:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:57.855 16:08:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:57.855 16:08:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:57.855 16:08:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:57.855 16:08:52 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:57.855 16:08:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:57.855 16:08:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:58.114 16:08:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:58.114 [2024-12-09 16:08:53.292028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:58.114 [2024-12-09 16:08:53.292383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f87500 (107): Transport endpoint is not connected 00:35:58.114 [2024-12-09 16:08:53.293378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f87500 (9): Bad file descriptor 00:35:58.114 [2024-12-09 16:08:53.294379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:58.114 [2024-12-09 16:08:53.294391] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:58.114 [2024-12-09 16:08:53.294398] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:58.114 [2024-12-09 16:08:53.294407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:58.114 request: 00:35:58.114 { 00:35:58.114 "name": "nvme0", 00:35:58.114 "trtype": "tcp", 00:35:58.114 "traddr": "127.0.0.1", 00:35:58.114 "adrfam": "ipv4", 00:35:58.114 "trsvcid": "4420", 00:35:58.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.114 "prchk_reftag": false, 00:35:58.114 "prchk_guard": false, 00:35:58.114 "hdgst": false, 00:35:58.114 "ddgst": false, 00:35:58.114 "psk": ":spdk-test:key1", 00:35:58.114 "allow_unrecognized_csi": false, 00:35:58.114 "method": "bdev_nvme_attach_controller", 00:35:58.114 "req_id": 1 00:35:58.114 } 00:35:58.114 Got JSON-RPC error response 00:35:58.114 response: 00:35:58.114 { 00:35:58.114 "code": -5, 00:35:58.114 "message": "Input/output error" 00:35:58.114 } 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@33 -- # sn=271393557 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 271393557 00:35:58.114 1 links removed 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:58.114 
16:08:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@33 -- # sn=326758447 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 326758447 00:35:58.114 1 links removed 00:35:58.114 16:08:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2279270 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2279270 ']' 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2279270 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.114 16:08:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279270 00:35:58.372 16:08:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:58.372 16:08:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279270' 00:35:58.373 killing process with pid 2279270 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2279270 00:35:58.373 Received shutdown signal, test time was about 1.000000 seconds 00:35:58.373 00:35:58.373 Latency(us) 00:35:58.373 [2024-12-09T15:08:53.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.373 [2024-12-09T15:08:53.601Z] =================================================================================================================== 00:35:58.373 [2024-12-09T15:08:53.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2279270 
00:35:58.373 16:08:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2279123 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2279123 ']' 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2279123 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279123 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279123' 00:35:58.373 killing process with pid 2279123 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2279123 00:35:58.373 16:08:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2279123 00:35:58.940 00:35:58.940 real 0m4.355s 00:35:58.940 user 0m8.288s 00:35:58.940 sys 0m1.414s 00:35:58.940 16:08:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.940 16:08:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:58.940 ************************************ 00:35:58.940 END TEST keyring_linux 00:35:58.940 ************************************ 00:35:58.940 16:08:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:58.940 16:08:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:58.940 16:08:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:58.940 16:08:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:58.940 16:08:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:58.940 16:08:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:58.940 16:08:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:58.940 16:08:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.940 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:35:58.940 16:08:53 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:58.940 16:08:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:58.940 16:08:53 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:58.940 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:36:04.210 INFO: APP EXITING 00:36:04.210 INFO: killing all VMs 00:36:04.210 INFO: killing vhost app 00:36:04.210 INFO: EXIT DONE 00:36:06.745 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:07.004 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:07.004 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:07.004 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:07.004 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:07.004 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:07.004 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.7 (8086 2021): Already using the ioatdma driver 
00:36:07.263 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:07.263 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:10.554 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:36:10.554 Cleaning 00:36:10.554 Removing: /var/run/dpdk/spdk0/config 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:10.554 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:10.554 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:10.554 Removing: /var/run/dpdk/spdk1/config 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:10.554 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:10.554 Removing: 
/var/run/dpdk/spdk1/hugepage_info 00:36:10.554 Removing: /var/run/dpdk/spdk2/config 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:10.554 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:10.554 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:10.554 Removing: /var/run/dpdk/spdk3/config 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:10.554 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:10.554 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:10.554 Removing: /var/run/dpdk/spdk4/config 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:10.554 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:10.554 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:10.554 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:10.554 Removing: /dev/shm/bdev_svc_trace.1 00:36:10.554 Removing: /dev/shm/nvmf_trace.0 00:36:10.554 Removing: /dev/shm/spdk_tgt_trace.pid1803350 00:36:10.554 Removing: /var/run/dpdk/spdk0 00:36:10.554 Removing: /var/run/dpdk/spdk1 00:36:10.554 Removing: /var/run/dpdk/spdk2 00:36:10.554 Removing: /var/run/dpdk/spdk3 00:36:10.554 Removing: /var/run/dpdk/spdk4 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1801140 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1802285 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1803350 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1803978 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1804918 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1804935 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1805942 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1806117 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1806392 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1807978 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1809242 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1809576 00:36:10.554 Removing: /var/run/dpdk/spdk_pid1809819 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1810118 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1810407 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1810657 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1810902 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1811185 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1811921 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1814881 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1815145 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1815397 00:36:10.813 Removing: /var/run/dpdk/spdk_pid1815504 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1815891 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1816014 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1816385 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1816582 00:36:10.814 Removing: 
/var/run/dpdk/spdk_pid1816793 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1816866 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1817119 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1817137 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1817691 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1817911 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1818233 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1821901 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1826126 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1836763 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1837443 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1841679 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1841937 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1846159 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1851970 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1854555 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1864874 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1873712 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1875518 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1876553 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1893865 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1897888 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1943804 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1948974 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1954718 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1961092 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1961096 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1961999 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1962899 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1963800 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1964265 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1964275 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1964513 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1964724 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1964728 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1965629 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1966529 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1967343 
00:36:10.814 Removing: /var/run/dpdk/spdk_pid1967900 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1968033 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1968346 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1969360 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1970461 00:36:10.814 Removing: /var/run/dpdk/spdk_pid1979049 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2007828 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2012277 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2013877 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2015680 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2015702 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2015931 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2016075 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2016553 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2018263 00:36:10.814 Removing: /var/run/dpdk/spdk_pid2019140 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2019518 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2021794 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2022274 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2022776 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2027005 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2032567 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2032568 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2032569 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2036308 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2044897 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2049140 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2055448 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2056731 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2058042 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2059579 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2064066 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2068397 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2072320 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2079830 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2079832 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2084380 00:36:11.073 Removing: 
/var/run/dpdk/spdk_pid2084577 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2084850 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2085187 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2085200 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2089641 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2090198 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2094629 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2097218 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2103065 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2108343 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2117031 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2124113 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2124175 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2142910 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2143451 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2144057 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2144525 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2145252 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2145881 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2146533 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2147376 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2151593 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2151821 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2157771 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2157886 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2163313 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2167398 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2177125 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2177596 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2181869 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2182273 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2186265 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2192056 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2195120 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2204947 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2213539 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2215232 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2216053 
00:36:11.073 Removing: /var/run/dpdk/spdk_pid2232202 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2235992 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2238664 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2246824 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2246864 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2252047 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2253914 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2255850 00:36:11.073 Removing: /var/run/dpdk/spdk_pid2256985 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2258928 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2260122 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2268902 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2269355 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2269809 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2272291 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2272753 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2273211 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2277066 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2277096 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2278632 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2279123 00:36:11.332 Removing: /var/run/dpdk/spdk_pid2279270 00:36:11.332 Clean 00:36:11.332 16:09:06 -- common/autotest_common.sh@1453 -- # return 0 00:36:11.332 16:09:06 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:11.332 16:09:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.332 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:36:11.332 16:09:06 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:11.332 16:09:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.332 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:36:11.332 16:09:06 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:11.332 16:09:06 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:11.332 16:09:06 -- spdk/autotest.sh@394 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:11.332 16:09:06 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:11.332 16:09:06 -- spdk/autotest.sh@398 -- # hostname 00:36:11.332 16:09:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-03 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:11.591 geninfo: WARNING: invalid characters removed from testname! 00:36:33.527 16:09:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:35.065 16:09:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:36.968 16:09:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:38.872 16:09:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:40.248 16:09:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:42.151 16:09:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:44.056 16:09:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:44.056 16:09:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:44.056 16:09:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:44.056 16:09:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:44.056 16:09:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:44.056 16:09:39 -- common/autotest_common.sh@744 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:44.056 + [[ -n 1722262 ]] 00:36:44.056 + sudo kill 1722262 00:36:44.066 [Pipeline] } 00:36:44.082 [Pipeline] // stage 00:36:44.087 [Pipeline] } 00:36:44.101 [Pipeline] // timeout 00:36:44.107 [Pipeline] } 00:36:44.121 [Pipeline] // catchError 00:36:44.126 [Pipeline] } 00:36:44.140 [Pipeline] // wrap 00:36:44.146 [Pipeline] } 00:36:44.159 [Pipeline] // catchError 00:36:44.169 [Pipeline] stage 00:36:44.171 [Pipeline] { (Epilogue) 00:36:44.185 [Pipeline] catchError 00:36:44.186 [Pipeline] { 00:36:44.199 [Pipeline] echo 00:36:44.201 Cleanup processes 00:36:44.206 [Pipeline] sh 00:36:44.493 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:44.493 2290766 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:44.506 [Pipeline] sh 00:36:44.791 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:44.791 ++ grep -v 'sudo pgrep' 00:36:44.791 ++ awk '{print $1}' 00:36:44.791 + sudo kill -9 00:36:44.791 + true 00:36:44.802 [Pipeline] sh 00:36:45.087 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:57.300 [Pipeline] sh 00:36:57.586 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:57.586 Artifacts sizes are good 00:36:57.600 [Pipeline] archiveArtifacts 00:36:57.608 Archiving artifacts 00:36:57.743 [Pipeline] sh 00:36:58.028 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:36:58.041 [Pipeline] cleanWs 00:36:58.051 [WS-CLEANUP] Deleting project workspace... 00:36:58.051 [WS-CLEANUP] Deferred wipeout is used... 
00:36:58.058 [WS-CLEANUP] done 00:36:58.060 [Pipeline] } 00:36:58.075 [Pipeline] // catchError 00:36:58.087 [Pipeline] sh 00:36:58.405 + logger -p user.info -t JENKINS-CI 00:36:58.414 [Pipeline] } 00:36:58.427 [Pipeline] // stage 00:36:58.432 [Pipeline] } 00:36:58.446 [Pipeline] // node 00:36:58.451 [Pipeline] End of Pipeline 00:36:58.492 Finished: SUCCESS